k3s container log dump
time="2020-09-17T06:40:34.488298218Z" level=info msg="Starting k3s v1.18.6+k3s1 (6f56fa1d)" | |
time="2020-09-17T06:40:35.141344995Z" level=info msg="Kine listening on unix://kine.sock" | |
time="2020-09-17T06:40:35.339480114Z" level=info msg="Active TLS secret (ver=) (count 8): map[listener.cattle.io/cn-0.0.0.0:0.0.0.0 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.27.0.2:172.27.0.2 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/hash:975a58fa049171a3cd99dc91a2a922ec83aeee121b28cf3ca23262809bdf0215]" | |
time="2020-09-17T06:40:35.347943261Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" | |
Flag --basic-auth-file has been deprecated, Basic authentication mode is deprecated and will be removed in a future release. It is not recommended for production environments. | |
I0917 06:40:35.348389 6 server.go:645] external host was not specified, using 172.27.0.2 | |
I0917 06:40:35.348641 6 server.go:162] Version: v1.18.6+k3s1 | |
I0917 06:40:35.982525 6 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. | |
I0917 06:40:35.982546 6 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. | |
I0917 06:40:35.983619 6 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. | |
I0917 06:40:35.983633 6 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. | |
I0917 06:40:36.006664 6 master.go:270] Using reconciler: lease | |
I0917 06:40:36.055554 6 rest.go:113] the default service ipfamily for this cluster is: IPv4 | |
W0917 06:40:36.385452 6 genericapiserver.go:409] Skipping API batch/v2alpha1 because it has no resources. | |
W0917 06:40:36.394378 6 genericapiserver.go:409] Skipping API discovery.k8s.io/v1alpha1 because it has no resources. | |
W0917 06:40:36.405721 6 genericapiserver.go:409] Skipping API node.k8s.io/v1alpha1 because it has no resources. | |
W0917 06:40:36.422075 6 genericapiserver.go:409] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources. | |
W0917 06:40:36.425266 6 genericapiserver.go:409] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources. | |
W0917 06:40:36.440433 6 genericapiserver.go:409] Skipping API storage.k8s.io/v1alpha1 because it has no resources. | |
W0917 06:40:36.462145 6 genericapiserver.go:409] Skipping API apps/v1beta2 because it has no resources. | |
W0917 06:40:36.462172 6 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources. | |
I0917 06:40:36.472523 6 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. | |
I0917 06:40:36.472544 6 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. | |
I0917 06:40:38.501293 6 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt | |
I0917 06:40:38.501335 6 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt | |
I0917 06:40:38.501541 6 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key | |
I0917 06:40:38.502001 6 secure_serving.go:178] Serving securely on 127.0.0.1:6444 | |
I0917 06:40:38.502078 6 controller.go:81] Starting OpenAPI AggregationController | |
I0917 06:40:38.502110 6 tlsconfig.go:240] Starting DynamicServingCertificateController | |
I0917 06:40:38.502200 6 crd_finalizer.go:266] Starting CRDFinalizer | |
I0917 06:40:38.502308 6 establishing_controller.go:76] Starting EstablishingController | |
I0917 06:40:38.502316 6 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController | |
I0917 06:40:38.502332 6 customresource_discovery_controller.go:209] Starting DiscoveryController | |
I0917 06:40:38.502339 6 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController | |
I0917 06:40:38.502354 6 naming_controller.go:291] Starting NamingConditionController | |
I0917 06:40:38.502365 6 autoregister_controller.go:141] Starting autoregister controller | |
I0917 06:40:38.502373 6 cache.go:32] Waiting for caches to sync for autoregister controller | |
I0917 06:40:38.502392 6 apiservice_controller.go:94] Starting APIServiceRegistrationController | |
I0917 06:40:38.502402 6 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller | |
I0917 06:40:38.502422 6 available_controller.go:387] Starting AvailableConditionController | |
I0917 06:40:38.502431 6 cache.go:32] Waiting for caches to sync for AvailableConditionController controller | |
I0917 06:40:38.502453 6 crdregistration_controller.go:111] Starting crd-autoregister controller | |
I0917 06:40:38.502462 6 shared_informer.go:223] Waiting for caches to sync for crd-autoregister | |
I0917 06:40:38.502586 6 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller | |
I0917 06:40:38.502596 6 shared_informer.go:223] Waiting for caches to sync for cluster_authentication_trust_controller | |
I0917 06:40:38.502630 6 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt | |
I0917 06:40:38.502658 6 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt | |
I0917 06:40:38.504143 6 controller.go:86] Starting OpenAPI controller | |
I0917 06:40:38.602425 6 cache.go:39] Caches are synced for autoregister controller | |
I0917 06:40:38.602491 6 cache.go:39] Caches are synced for AvailableConditionController controller | |
I0917 06:40:38.602502 6 shared_informer.go:230] Caches are synced for crd-autoregister | |
I0917 06:40:38.602490 6 cache.go:39] Caches are synced for APIServiceRegistrationController controller | |
I0917 06:40:38.602708 6 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller | |
E0917 06:40:38.632666 6 controller.go:151] Unable to perform initial Kubernetes service initialization: Service "kubernetes" is invalid: spec.clusterIP: Invalid value: "10.43.0.1": cannot allocate resources of type serviceipallocations at this time | |
E0917 06:40:38.633328 6 controller.go:156] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.27.0.2, ResourceVersion: 0, AdditionalErrorMsg: | |
I0917 06:40:39.501241 6 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue). | |
I0917 06:40:39.501269 6 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). | |
I0917 06:40:39.505461 6 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000 | |
I0917 06:40:39.507985 6 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000 | |
I0917 06:40:39.508006 6 storage_scheduling.go:143] all system priority classes are created successfully or already exist. | |
I0917 06:40:39.741911 6 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io | |
I0917 06:40:39.767704 6 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io | |
W0917 06:40:39.853603 6 lease.go:224] Resetting endpoints for master service "kubernetes" to [172.27.0.2] | |
I0917 06:40:39.854495 6 controller.go:606] quota admission added evaluator for: endpoints | |
I0917 06:40:39.857226 6 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io | |
I0917 06:40:40.511877 6 registry.go:150] Registering EvenPodsSpread predicate and priority function | |
I0917 06:40:40.511909 6 registry.go:150] Registering EvenPodsSpread predicate and priority function | |
time="2020-09-17T06:40:40.513809407Z" level=info msg="Running kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --secure-port=0" | |
time="2020-09-17T06:40:40.514238888Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true" | |
time="2020-09-17T06:40:40.519639648Z" level=info msg="Waiting for cloudcontroller rbac role to be created" | |
I0917 06:40:40.519777 6 controllermanager.go:161] Version: v1.18.6+k3s1 | |
time="2020-09-17T06:40:40.520251843Z" level=info msg="Creating CRD addons.k3s.cattle.io" | |
I0917 06:40:40.520703 6 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252 | |
time="2020-09-17T06:40:40.524432895Z" level=info msg="Creating CRD helmcharts.helm.cattle.io" | |
I0917 06:40:40.528073 6 registry.go:150] Registering EvenPodsSpread predicate and priority function | |
I0917 06:40:40.528096 6 registry.go:150] Registering EvenPodsSpread predicate and priority function | |
W0917 06:40:40.529336 6 authorization.go:47] Authorization is disabled | |
W0917 06:40:40.529348 6 authentication.go:40] Authentication is disabled | |
I0917 06:40:40.529357 6 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251 | |
time="2020-09-17T06:40:40.530014805Z" level=info msg="Waiting for CRD addons.k3s.cattle.io to become available" | |
I0917 06:40:40.884730 6 plugins.go:100] No cloud provider specified. | |
I0917 06:40:40.886818 6 shared_informer.go:223] Waiting for caches to sync for tokens | |
I0917 06:40:40.891358 6 controller.go:606] quota admission added evaluator for: serviceaccounts | |
I0917 06:40:40.893404 6 controllermanager.go:533] Started "statefulset" | |
I0917 06:40:40.893426 6 stateful_set.go:146] Starting stateful set controller | |
I0917 06:40:40.893435 6 shared_informer.go:223] Waiting for caches to sync for stateful set | |
I0917 06:40:40.899931 6 controllermanager.go:533] Started "cronjob" | |
I0917 06:40:40.899964 6 cronjob_controller.go:97] Starting CronJob Manager | |
W0917 06:40:40.906172 6 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. | |
I0917 06:40:40.906672 6 controllermanager.go:533] Started "attachdetach" | |
I0917 06:40:40.906796 6 attach_detach_controller.go:348] Starting attach detach controller | |
I0917 06:40:40.906808 6 shared_informer.go:223] Waiting for caches to sync for attach detach | |
I0917 06:40:40.913330 6 controllermanager.go:533] Started "pvc-protection" | |
W0917 06:40:40.913349 6 controllermanager.go:512] "tokencleaner" is disabled | |
I0917 06:40:40.913395 6 pvc_protection_controller.go:101] Starting PVC protection controller | |
I0917 06:40:40.913404 6 shared_informer.go:223] Waiting for caches to sync for PVC protection | |
E0917 06:40:40.919634 6 core.go:89] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail | |
W0917 06:40:40.919652 6 controllermanager.go:525] Skipping "service" | |
W0917 06:40:40.919664 6 core.go:243] configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes. | |
W0917 06:40:40.919671 6 controllermanager.go:525] Skipping "route" | |
I0917 06:40:40.926059 6 controllermanager.go:533] Started "endpointslice" | |
I0917 06:40:40.926077 6 endpointslice_controller.go:213] Starting endpoint slice controller | |
I0917 06:40:40.926085 6 shared_informer.go:223] Waiting for caches to sync for endpoint_slice | |
I0917 06:40:40.986933 6 shared_informer.go:230] Caches are synced for tokens | |
time="2020-09-17T06:40:41.032282767Z" level=info msg="Done waiting for CRD addons.k3s.cattle.io to become available" | |
time="2020-09-17T06:40:41.032309632Z" level=info msg="Waiting for CRD helmcharts.helm.cattle.io to become available" | |
I0917 06:40:41.283307 6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io | |
W0917 06:40:41.283357 6 shared_informer.go:461] resyncPeriod 70317527453913 is smaller than resyncCheckPeriod 79294995024218 and the informer has already started. Changing it to 79294995024218 | |
I0917 06:40:41.283483 6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions | |
I0917 06:40:41.283512 6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.apps | |
I0917 06:40:41.283539 6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.apps | |
I0917 06:40:41.283603 6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints | |
I0917 06:40:41.283621 6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for controllerrevisions.apps | |
I0917 06:40:41.283649 6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch | |
I0917 06:40:41.283684 6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io | |
I0917 06:40:41.283719 6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy | |
I0917 06:40:41.283751 6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for leases.coordination.k8s.io | |
W0917 06:40:41.283768 6 shared_informer.go:461] resyncPeriod 48180005252325 is smaller than resyncCheckPeriod 79294995024218 and the informer has already started. Changing it to 79294995024218 | |
I0917 06:40:41.283838 6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for serviceaccounts | |
I0917 06:40:41.283879 6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates | |
I0917 06:40:41.283915 6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for limitranges | |
I0917 06:40:41.283947 6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.apps | |
I0917 06:40:41.283974 6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch | |
I0917 06:40:41.284007 6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io | |
I0917 06:40:41.284032 6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io | |
I0917 06:40:41.284085 6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for events.events.k8s.io | |
I0917 06:40:41.284133 6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for helmcharts.helm.cattle.io | |
I0917 06:40:41.284177 6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for statefulsets.apps | |
I0917 06:40:41.284213 6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling | |
I0917 06:40:41.284242 6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io | |
I0917 06:40:41.284273 6 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for addons.k3s.cattle.io | |
I0917 06:40:41.284291 6 controllermanager.go:533] Started "resourcequota" | |
I0917 06:40:41.284311 6 resource_quota_controller.go:272] Starting resource quota controller | |
I0917 06:40:41.284324 6 shared_informer.go:223] Waiting for caches to sync for resource quota | |
I0917 06:40:41.284346 6 resource_quota_monitor.go:303] QuotaMonitor running | |
I0917 06:40:41.291345 6 controllermanager.go:533] Started "deployment" | |
I0917 06:40:41.291437 6 deployment_controller.go:153] Starting deployment controller | |
I0917 06:40:41.291446 6 shared_informer.go:223] Waiting for caches to sync for deployment | |
I0917 06:40:41.304257 6 controllermanager.go:533] Started "horizontalpodautoscaling" | |
I0917 06:40:41.304319 6 horizontal.go:169] Starting HPA controller | |
I0917 06:40:41.304331 6 shared_informer.go:223] Waiting for caches to sync for HPA | |
I0917 06:40:41.310008 6 controllermanager.go:533] Started "csrcleaner" | |
I0917 06:40:41.310084 6 cleaner.go:82] Starting CSR cleaner controller | |
I0917 06:40:41.496850 6 controllermanager.go:533] Started "namespace" | |
I0917 06:40:41.496880 6 namespace_controller.go:200] Starting namespace controller | |
I0917 06:40:41.496890 6 shared_informer.go:223] Waiting for caches to sync for namespace | |
time="2020-09-17T06:40:41.523656955Z" level=info msg="Waiting for cloudcontroller rbac role to be created" | |
time="2020-09-17T06:40:41.534782724Z" level=info msg="Done waiting for CRD helmcharts.helm.cattle.io to become available" | |
time="2020-09-17T06:40:41.544550913Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.81.0.tgz" | |
time="2020-09-17T06:40:41.544824434Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml" | |
time="2020-09-17T06:40:41.544928588Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml" | |
time="2020-09-17T06:40:41.545021460Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml" | |
time="2020-09-17T06:40:41.545108266Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml" | |
time="2020-09-17T06:40:41.545211514Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/ccm.yaml" | |
time="2020-09-17T06:40:41.545313733Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/local-storage.yaml" | |
time="2020-09-17T06:40:41.545415623Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml" | |
time="2020-09-17T06:40:41.545529487Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml" | |
time="2020-09-17T06:40:41.545635960Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml" | |
time="2020-09-17T06:40:41.545755476Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml" | |
time="2020-09-17T06:40:41.545896411Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml" | |
time="2020-09-17T06:40:41.545983196Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml" | |
time="2020-09-17T06:40:41.646620744Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller" | |
time="2020-09-17T06:40:41.646824596Z" level=info msg="Waiting for master node startup: resource name may not be empty" | |
time="2020-09-17T06:40:41.647331237Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token" | |
time="2020-09-17T06:40:41.647391174Z" level=info msg="To join node to cluster: k3s agent -s https://172.27.0.2:6443 -t ${NODE_TOKEN}" | |
I0917 06:40:41.652620 6 node_lifecycle_controller.go:384] Sending events to api server. | |
I0917 06:40:41.654668 6 taint_manager.go:163] Sending events to api server. | |
I0917 06:40:41.655163 6 node_lifecycle_controller.go:512] Controller will reconcile labels. | |
I0917 06:40:41.655461 6 controllermanager.go:533] Started "nodelifecycle" | |
W0917 06:40:41.655516 6 controllermanager.go:525] Skipping "root-ca-cert-publisher" | |
I0917 06:40:41.655714 6 node_lifecycle_controller.go:546] Starting node controller | |
I0917 06:40:41.655806 6 shared_informer.go:223] Waiting for caches to sync for taint | |
I0917 06:40:41.709339 6 controller.go:606] quota admission added evaluator for: addons.k3s.cattle.io | |
2020-09-17 06:40:41.719912 I | http: TLS handshake error from 127.0.0.1:54462: remote error: tls: bad certificate | |
time="2020-09-17T06:40:41.726550095Z" level=info msg="Wrote kubeconfig /output/kubeconfig.yaml" | |
time="2020-09-17T06:40:41.726572991Z" level=info msg="Run: k3s kubectl" | |
time="2020-09-17T06:40:41.726582208Z" level=info msg="k3s is up and running" | |
time="2020-09-17T06:40:41.726695175Z" level=info msg="module overlay was already loaded" | |
time="2020-09-17T06:40:41.726715491Z" level=info msg="module nf_conntrack was already loaded" | |
time="2020-09-17T06:40:41.726735040Z" level=info msg="module br_netfilter was already loaded" | |
2020-09-17 06:40:41.729538 I | http: TLS handshake error from 127.0.0.1:54470: remote error: tls: bad certificate | |
2020-09-17 06:40:41.733865 I | http: TLS handshake error from 127.0.0.1:54476: remote error: tls: bad certificate | |
time="2020-09-17T06:40:41.753527640Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log" | |
time="2020-09-17T06:40:41.753640746Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd" | |
time="2020-09-17T06:40:41.753769415Z" level=info msg="Waiting for containerd startup: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: no such file or directory\"" | |
time="2020-09-17T06:40:41.756299515Z" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChart controller" | |
time="2020-09-17T06:40:41.756309578Z" level=info msg="Starting batch/v1, Kind=Job controller" | |
time="2020-09-17T06:40:41.756320262Z" level=info msg="Starting /v1, Kind=Service controller" | |
time="2020-09-17T06:40:41.756320716Z" level=info msg="Starting /v1, Kind=Endpoints controller" | |
time="2020-09-17T06:40:41.756330257Z" level=info msg="Starting /v1, Kind=Pod controller" | |
time="2020-09-17T06:40:41.756329863Z" level=info msg="Starting /v1, Kind=Node controller" | |
I0917 06:40:41.790641 6 controllermanager.go:533] Started "clusterrole-aggregation" | |
I0917 06:40:41.790698 6 clusterroleaggregation_controller.go:149] Starting ClusterRoleAggregator | |
I0917 06:40:41.790707 6 shared_informer.go:223] Waiting for caches to sync for ClusterRoleAggregator | |
I0917 06:40:41.940132 6 controllermanager.go:533] Started "endpoint" | |
I0917 06:40:41.940154 6 endpoints_controller.go:182] Starting endpoint controller | |
I0917 06:40:41.940166 6 shared_informer.go:223] Waiting for caches to sync for endpoint | |
I0917 06:40:42.014833 6 controller.go:606] quota admission added evaluator for: deployments.apps | |
I0917 06:40:42.090144 6 controllermanager.go:533] Started "replicationcontroller" | |
I0917 06:40:42.090193 6 replica_set.go:181] Starting replicationcontroller controller | |
I0917 06:40:42.090200 6 shared_informer.go:223] Waiting for caches to sync for ReplicationController | |
I0917 06:40:42.141438 6 controllermanager.go:533] Started "csrsigning" | |
I0917 06:40:42.141518 6 certificate_controller.go:119] Starting certificate controller "csrsigning" | |
I0917 06:40:42.141533 6 shared_informer.go:223] Waiting for caches to sync for certificate-csrsigning | |
I0917 06:40:42.141559 6 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/rancher/k3s/server/tls/server-ca.crt::/var/lib/rancher/k3s/server/tls/server-ca.key | |
I0917 06:40:42.290362 6 controllermanager.go:533] Started "ttl" | |
I0917 06:40:42.290401 6 ttl_controller.go:118] Starting TTL controller | |
I0917 06:40:42.290407 6 shared_informer.go:223] Waiting for caches to sync for TTL | |
I0917 06:40:42.432330 6 request.go:621] Throttling request took 1.049231699s, request: GET:https://127.0.0.1:6444/apis/autoscaling/v2beta1?timeout=32s | |
I0917 06:40:42.441072 6 controllermanager.go:533] Started "persistentvolume-expander" | |
I0917 06:40:42.441117 6 expand_controller.go:319] Starting expand controller | |
I0917 06:40:42.441127 6 shared_informer.go:223] Waiting for caches to sync for expand | |
time="2020-09-17T06:40:42.445783357Z" level=info msg="Starting /v1, Kind=Secret controller" | |
time="2020-09-17T06:40:42.449724950Z" level=info msg="Active TLS secret k3s-serving (ver=223) (count 8): map[listener.cattle.io/cn-0.0.0.0:0.0.0.0 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.27.0.2:172.27.0.2 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/hash:975a58fa049171a3cd99dc91a2a922ec83aeee121b28cf3ca23262809bdf0215]" | |
time="2020-09-17T06:40:42.531113699Z" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --allow-untagged-cloud=true --bind-address=127.0.0.1 --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --node-status-update-frequency=1m --secure-port=0" | |
Flag --allow-untagged-cloud has been deprecated, This flag is deprecated and will be removed in a future release. A cluster-id will be required on cloud instances. | |
I0917 06:40:42.542526 6 controllermanager.go:120] Version: v1.18.6+k3s1 | |
W0917 06:40:42.542558 6 controllermanager.go:132] detected a cluster without a ClusterID. A ClusterID will be required in the future. Please tag your cluster to avoid any future issues | |
I0917 06:40:42.548279 6 node_controller.go:110] Sending events to api server. | |
I0917 06:40:42.548369 6 controllermanager.go:247] Started "cloud-node" | |
I0917 06:40:42.550631 6 node_lifecycle_controller.go:78] Sending events to api server | |
I0917 06:40:42.550669 6 controllermanager.go:247] Started "cloud-node-lifecycle" | |
E0917 06:40:42.552125 6 core.go:90] Failed to start service controller: the cloud provider does not support external load balancers | |
W0917 06:40:42.552137 6 controllermanager.go:244] Skipping "service" | |
W0917 06:40:42.552146 6 core.go:108] configure-cloud-routes is set, but cloud provider does not support routes. Will not configure cloud provider routes. | |
W0917 06:40:42.552153 6 controllermanager.go:244] Skipping "route" | |
I0917 06:40:42.592950 6 controllermanager.go:533] Started "job" | |
I0917 06:40:42.593015 6 job_controller.go:144] Starting job controller | |
I0917 06:40:42.593027 6 shared_informer.go:223] Waiting for caches to sync for job | |
time="2020-09-17T06:40:42.650184140Z" level=info msg="Waiting for master node k3d-k3s-default-server-0 startup: nodes \"k3d-k3s-default-server-0\" not found" | |
I0917 06:40:42.742295 6 controllermanager.go:533] Started "podgc" | |
I0917 06:40:42.742339 6 gc_controller.go:89] Starting GC controller | |
I0917 06:40:42.742346 6 shared_informer.go:223] Waiting for caches to sync for GC | |
time="2020-09-17T06:40:42.765907901Z" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us" | |
time="2020-09-17T06:40:42.766918386Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=/run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --cpu-cfs-quota=false --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=k3d-k3s-default-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/systemd/system.slice --node-labels= --read-only-port=0 --resolv-conf=/tmp/k3s-resolv.conf --runtime-cgroups=/systemd/system.slice --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key" | |
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed. | |
time="2020-09-17T06:40:42.767060789Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --healthz-bind-address=127.0.0.1 --hostname-override=k3d-k3s-default-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables" | |
W0917 06:40:42.767251 6 server.go:225] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP. | |
I0917 06:40:42.767928 6 server.go:413] Version: v1.18.6+k3s1 | |
I0917 06:40:42.773519 6 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt | |
W0917 06:40:42.774481 6 fs.go:206] stat failed on /dev/mapper/system with error: no such file or directory | |
time="2020-09-17T06:40:42.775938066Z" level=info msg="waiting for node k3d-k3s-default-server-0: nodes \"k3d-k3s-default-server-0\" not found" | |
W0917 06:40:42.777806 6 info.go:51] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id" | |
I0917 06:40:42.778322 6 server.go:645] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to / | |
I0917 06:40:42.778926 6 container_manager_linux.go:277] container manager verified user specified cgroup-root exists: [] | |
I0917 06:40:42.778955 6 container_manager_linux.go:282] Creating Container Manager object based on Node Config: {RuntimeCgroupsName:/systemd/system.slice SystemCgroupsName: KubeletCgroupsName:/systemd/system.slice ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:false CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none} | |
I0917 06:40:42.779122 6 topology_manager.go:126] [topologymanager] Creating topology manager with none policy | |
I0917 06:40:42.779146 6 container_manager_linux.go:312] [topologymanager] Initializing Topology Manager with none policy | |
I0917 06:40:42.779172 6 container_manager_linux.go:317] Creating device plugin manager: true | |
W0917 06:40:42.779404 6 util_unix.go:103] Using "/run/k3s/containerd/containerd.sock" as endpoint is deprecated, please consider using full url format "unix:///run/k3s/containerd/containerd.sock". | |
W0917 06:40:42.779530 6 util_unix.go:103] Using "/run/k3s/containerd/containerd.sock" as endpoint is deprecated, please consider using full url format "unix:///run/k3s/containerd/containerd.sock". | |
I0917 06:40:42.779607 6 kubelet.go:317] Watching apiserver | |
W0917 06:40:42.814092 6 proxier.go:625] Failed to read file /lib/modules/5.8.9-arch2-1/modules.builtin with error open /lib/modules/5.8.9-arch2-1/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules | |
I0917 06:40:42.814226 6 kuberuntime_manager.go:211] Container runtime containerd initialized, version: v1.3.3-k3s2, apiVersion: v1alpha2 | |
I0917 06:40:42.814661 6 server.go:1124] Started kubelet | |
I0917 06:40:42.814730 6 server.go:145] Starting to listen on 0.0.0.0:10250 | |
W0917 06:40:42.815418 6 fs.go:540] stat failed on /dev/mapper/system with error: no such file or directory | |
E0917 06:40:42.815439 6 cri_stats_provider.go:375] Failed to get the info of the filesystem with mountpoint "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs": failed to get device for dir "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs": could not find device with major: 0, minor: 29 in cached partitions map. | |
E0917 06:40:42.815451 6 kubelet.go:1306] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem | |
I0917 06:40:42.815597 6 server.go:393] Adding debug handlers to kubelet server. | |
I0917 06:40:42.816079 6 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer | |
I0917 06:40:42.816229 6 volume_manager.go:265] Starting Kubelet Volume Manager | |
I0917 06:40:42.816565 6 desired_state_of_world_populator.go:139] Desired state populator starts to run | |
E0917 06:40:42.822233 6 controller.go:228] failed to get node "k3d-k3s-default-server-0" when trying to set owner ref to the node lease: nodes "k3d-k3s-default-server-0" not found | |
I0917 06:40:42.827759 6 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach | |
I0917 06:40:42.828808 6 cpu_manager.go:184] [cpumanager] starting with none policy | |
I0917 06:40:42.828820 6 cpu_manager.go:185] [cpumanager] reconciling every 10s | |
I0917 06:40:42.828837 6 state_mem.go:36] [cpumanager] initializing new in-memory state store | |
I0917 06:40:42.892368 6 controllermanager.go:533] Started "serviceaccount" | |
I0917 06:40:42.892438 6 serviceaccounts_controller.go:117] Starting service account controller | |
I0917 06:40:42.892463 6 shared_informer.go:223] Waiting for caches to sync for service account | |
I0917 06:40:42.901587 6 status_manager.go:158] Starting to sync pod status with apiserver | |
I0917 06:40:42.901613 6 kubelet.go:1822] Starting kubelet main sync loop. | |
E0917 06:40:42.901740 6 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful] | |
I0917 06:40:42.916381 6 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach | |
E0917 06:40:42.916428 6 kubelet.go:2268] node "k3d-k3s-default-server-0" not found | |
I0917 06:40:42.917467 6 kubelet_node_status.go:70] Attempting to register node k3d-k3s-default-server-0 | |
W0917 06:40:42.919821 6 proxier.go:635] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules | |
W0917 06:40:42.920288 6 proxier.go:635] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules | |
W0917 06:40:42.920570 6 proxier.go:635] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules | |
W0917 06:40:42.920855 6 proxier.go:635] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules | |
W0917 06:40:42.921119 6 proxier.go:635] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules | |
E0917 06:40:42.929477 6 node.go:125] Failed to retrieve node info: nodes "k3d-k3s-default-server-0" not found | |
I0917 06:40:42.940272 6 controllermanager.go:533] Started "csrapproving" | |
I0917 06:40:42.940293 6 certificate_controller.go:119] Starting certificate controller "csrapproving" | |
I0917 06:40:42.940302 6 shared_informer.go:223] Waiting for caches to sync for certificate-csrapproving | |
E0917 06:40:43.001906 6 kubelet.go:1846] skipping pod synchronization - container runtime status check may not have completed yet | |
E0917 06:40:43.016529 6 kubelet.go:2268] node "k3d-k3s-default-server-0" not found | |
I0917 06:40:43.064642 6 policy_none.go:43] [cpumanager] none policy: Start | |
W0917 06:40:43.064693 6 fs.go:540] stat failed on /dev/mapper/system with error: no such file or directory | |
F0917 06:40:43.064712 6 kubelet.go:1384] Failed to start ContainerManager failed to get rootfs info: failed to get device for dir "/var/lib/kubelet": could not find device with major: 0, minor: 29 in cached partitions map |
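
Note on the fatal line above: the kubelet exits because cadvisor cannot map /var/lib/kubelet back to a block device (device 0:29 is an unnamed/virtual device; the earlier "stat failed on /dev/mapper/system" warnings point the same way). A minimal diagnostic sketch on the host, assuming a standard Linux userland; these commands are not part of the log above:

    # Show which filesystem and source device back the kubelet root.
    # A btrfs or overlay source here would explain the lookup failure.
    findmnt --target /var/lib/kubelet

    # Print just the filesystem type (e.g. "btrfs", "overlayfs").
    stat -f -c '%T' /var/lib/kubelet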