Created October 23, 2021 06:31
root@controlplane ~$
root@controlplane ~$ kubectl get nodes
NAME           STATUS   ROLES                  AGE     VERSION
controlplane   Ready    control-plane,master   4m41s   v1.20.2
node01         Ready    <none>                 2m56s   v1.20.2
root@controlplane ~$ kubectl describe nodes controlplane | grep -i taint
Taints:             <none>
root@controlplane ~$ kubectl get deploy
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
blue   5/5     5            5           117s
root@controlplane ~$ kubectl get pod -o wide
NAME                    READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
blue-746c87566d-fsxmq   1/1     Running   0          2m9s    10.244.1.3   node01   <none>           <none>
blue-746c87566d-g8rgf   1/1     Running   0          2m9s    10.244.1.5   node01   <none>           <none>
blue-746c87566d-m47sp   1/1     Running   0          2m10s   10.244.1.6   node01   <none>           <none>
blue-746c87566d-mjcz8   1/1     Running   0          2m10s   10.244.1.2   node01   <none>           <none>
blue-746c87566d-mqnds   1/1     Running   0          2m10s   10.244.1.7   node01   <none>           <none>
simple-webapp-1         1/1     Running   0          2m11s   10.244.1.4   node01   <none>           <none>
root@controlplane ~$ kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.20.2
[upgrade/versions] kubeadm version: v1.20.2
I1023 06:15:25.625831   14513 version.go:251] remote version is much newer: v1.22.2; falling back to: stable-1.20
[upgrade/versions] Latest stable version: v1.20.11
[upgrade/versions] Latest stable version: v1.20.11
[upgrade/versions] Latest version in the v1.20 series: v1.20.11
[upgrade/versions] Latest version in the v1.20 series: v1.20.11

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
kubelet     2 x v1.20.2   v1.20.11

Upgrade to the latest version in the v1.20 series:

COMPONENT                 CURRENT    AVAILABLE
kube-apiserver            v1.20.2    v1.20.11
kube-controller-manager   v1.20.2    v1.20.11
kube-scheduler            v1.20.2    v1.20.11
kube-proxy                v1.20.2    v1.20.11
CoreDNS                   1.7.0      1.7.0
etcd                      3.4.13-0   3.4.13-0

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.20.11

Note: Before you can perform this upgrade, you have to update kubeadm to v1.20.11.

_____________________________________________________________________

The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________
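The plan above suggests v1.20.11, while the rest of the session jumps straight to v1.21.0. Both are valid targets because kubeadm supports moving at most one minor version per upgrade: v1.20.x → v1.21.0 is allowed, v1.20.x → v1.22.x is not. A minimal sketch of that skew check, using the version strings from this transcript (the `minor` helper is illustrative, not part of kubeadm):

```shell
#!/bin/sh
# Sketch: confirm an upgrade target is at most one minor version ahead
# of the current cluster version. Versions taken from the transcript.
current="v1.20.2"
target="v1.21.0"

minor() {
  # vMAJOR.MINOR.PATCH -> MINOR
  echo "$1" | cut -d. -f2
}

cur_minor=$(minor "$current")
tgt_minor=$(minor "$target")

if [ $((tgt_minor - cur_minor)) -le 1 ]; then
  echo "ok: $current -> $target stays within the supported skew"
else
  echo "error: $current -> $target skips a minor version" >&2
  exit 1
fi
```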
root@controlplane ~$ kubectl drain controlplane --ignore-daemonsets
node/controlplane cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-7wckv, kube-system/kube-proxy-sdv9g
evicting pod kube-system/coredns-74ff55c5b-s5nx4
evicting pod kube-system/coredns-74ff55c5b-kgtn5
pod/coredns-74ff55c5b-kgtn5 evicted
pod/coredns-74ff55c5b-s5nx4 evicted
node/controlplane evicted
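Draining both cordons the node and evicts its evictable pods; the `--ignore-daemonsets` flag is needed because DaemonSet-managed pods (flannel and kube-proxy here) cannot be evicted. The node stays `SchedulingDisabled` until it is uncordoned explicitly, so the full cycle around a node upgrade looks roughly like this (sketch, not executed in this session):

```shell
# Cordon the node and evict its pods before touching the components.
kubectl drain controlplane --ignore-daemonsets

# ... upgrade kubeadm, the control plane, and the kubelet here ...

# drain leaves the node cordoned; make it schedulable again afterwards.
kubectl uncordon controlplane
```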
root@controlplane ~$ apt update
Get:1 http://archive.ubuntu.com/ubuntu focal InRelease [265 kB]
Get:2 https://download.docker.com/linux/ubuntu focal InRelease [57.7 kB]
Get:4 http://archive.ubuntu.com/ubuntu focal-updates InRelease [114 kB]
Get:5 http://security.ubuntu.com/ubuntu focal-security InRelease [114 kB]
Get:6 http://archive.ubuntu.com/ubuntu focal-backports InRelease [101 kB]
Get:3 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [9,383 B]
Get:7 https://download.docker.com/linux/ubuntu focal/stable amd64 Packages [12.5 kB]
Get:8 http://archive.ubuntu.com/ubuntu focal/restricted amd64 Packages [33.4 kB]
Get:9 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages [1,275 kB]
Get:10 http://archive.ubuntu.com/ubuntu focal/universe amd64 Packages [11.3 MB]
Get:11 http://archive.ubuntu.com/ubuntu focal/multiverse amd64 Packages [177 kB]
Get:12 http://archive.ubuntu.com/ubuntu focal-updates/universe amd64 Packages [1,085 kB]
Get:13 http://archive.ubuntu.com/ubuntu focal-updates/restricted amd64 Packages [678 kB]
Get:14 http://archive.ubuntu.com/ubuntu focal-updates/multiverse amd64 Packages [33.4 kB]
Get:15 http://archive.ubuntu.com/ubuntu focal-updates/main amd64 Packages [1,626 kB]
Get:16 http://archive.ubuntu.com/ubuntu focal-backports/universe amd64 Packages [6,310 B]
Get:17 http://archive.ubuntu.com/ubuntu focal-backports/main amd64 Packages [2,668 B]
Get:18 http://security.ubuntu.com/ubuntu focal-security/universe amd64 Packages [801 kB]
Get:19 http://security.ubuntu.com/ubuntu focal-security/main amd64 Packages [1,179 kB]
Get:20 http://security.ubuntu.com/ubuntu focal-security/restricted amd64 Packages [626 kB]
Get:21 http://security.ubuntu.com/ubuntu focal-security/multiverse amd64 Packages [30.1 kB]
Get:22 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 Packages [50.0 kB]
Fetched 19.6 MB in 11s (1,852 kB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
56 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@controlplane ~$ apt install kubeadm=1.21.0-00
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be upgraded:
  kubeadm
1 upgraded, 0 newly installed, 0 to remove and 55 not upgraded.
Need to get 8,544 kB of archives.
After this operation, 5,407 kB of additional disk space will be used.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.21.0-00 [8,544 kB]
Fetched 8,544 kB in 1s (9,119 kB/s)
debconf: delaying package configuration, since apt-utils is not installed
(Reading database ... 14964 files and directories currently installed.)
Preparing to unpack .../kubeadm_1.21.0-00_amd64.deb ...
Unpacking kubeadm (1.21.0-00) over (1.20.2-00) ...
Setting up kubeadm (1.21.0-00) ...
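Pinning the version at install time (`kubeadm=1.21.0-00`) works, but a later blanket `apt upgrade` of the pending packages may still move the Kubernetes components unintentionally. A common guard, sketched here and not part of this session, is to hold the packages between deliberate upgrades:

```shell
# Sketch: prevent a routine 'apt upgrade' from bumping the Kubernetes
# packages past the version chosen deliberately.
apt-mark hold kubeadm kubelet kubectl

# When the next planned upgrade comes around, release the hold first:
apt-mark unhold kubeadm kubelet kubectl
```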
root@controlplane ~$ kubeadm upgrade apply v1.21.0
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.21.0"
[upgrade/versions] Cluster version: v1.20.2
[upgrade/versions] kubeadm version: v1.21.0
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
won't proceed; the user didn't answer (Y|y) in order to continue
To see the stack trace of this error execute with --v=5 or higher
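The first run aborted with "won't proceed" even though `y` was typed; the confirmation prompt is strict about what it reads from stdin. Re-running the same command worked (below), but kubeadm also provides a non-interactive flag that sidesteps the prompt entirely:

```shell
# Skip the [upgrade/confirm] prompt; -y / --yes runs the upgrade
# without asking for confirmation.
kubeadm upgrade apply v1.21.0 -y
```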
root@controlplane ~$ kubeadm upgrade apply v1.21.0
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.21.0"
[upgrade/versions] Cluster version: v1.20.2
[upgrade/versions] kubeadm version: v1.21.0
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.21.0"...
Static pod: kube-apiserver-controlplane hash: 3debc1ae911ee54d2981bc21a4db47c2
Static pod: kube-controller-manager-controlplane hash: 3456cf17d1057cfffaa60b9ccb6eaf2d
Static pod: kube-scheduler-controlplane hash: 69cd289b4ed80ced4f95a59ff60fa102
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-controlplane hash: 32d9994597541480477b3d95c4e76027
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Current and new manifests of etcd are equal, skipping upgrade
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests755007027"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-10-23-06-24-06/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-controlplane hash: 3debc1ae911ee54d2981bc21a4db47c2
Static pod: kube-apiserver-controlplane hash: 3debc1ae911ee54d2981bc21a4db47c2
Static pod: kube-apiserver-controlplane hash: 3debc1ae911ee54d2981bc21a4db47c2
Static pod: kube-apiserver-controlplane hash: 8b7909598414cf909c448484d62f153d
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[apiclient] Error getting Pods with label selector "component=kube-apiserver" [Get "https://controlplane:6443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver": dial tcp 172.18.0.3:6443: connect: connection refused]
[apiclient] Error getting Pods with label selector "component=kube-apiserver" [Get "https://controlplane:6443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver": dial tcp 172.18.0.3:6443: connect: connection refused]
[apiclient] Error getting Pods with label selector "component=kube-apiserver" [Get "https://controlplane:6443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver": net/http: TLS handshake timeout]
[apiclient] Error getting Pods with label selector "component=kube-apiserver" [Get "https://controlplane:6443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)]
[apiclient] Error getting Pods with label selector "component=kube-apiserver" [Get "https://controlplane:6443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver": context deadline exceeded (Client.Timeout exceeded while awaiting headers)]
[apiclient] Error getting Pods with label selector "component=kube-apiserver" [Get "https://controlplane:6443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver": net/http: request canceled (Client.Timeout exceeded while awaiting headers)]
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-10-23-06-24-06/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-controlplane hash: 3456cf17d1057cfffaa60b9ccb6eaf2d
Static pod: kube-controller-manager-controlplane hash: 3456cf17d1057cfffaa60b9ccb6eaf2d
Static pod: kube-controller-manager-controlplane hash: 3b1bf47c1fd81695c8b57c8a65959e65
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-10-23-06-24-06/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-controlplane hash: 69cd289b4ed80ced4f95a59ff60fa102
Static pod: kube-scheduler-controlplane hash: 69cd289b4ed80ced4f95a59ff60fa102
Static pod: kube-scheduler-controlplane hash: 98c4dbc724c870519b6f3d945a54b5d4
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/postupgrade] Applying label node-role.kubernetes.io/control-plane='' to Nodes with label node-role.kubernetes.io/master='' (deprecated)
[upgrade/postupgrade] Applying label node.kubernetes.io/exclude-from-external-load-balancers='' to control plane Nodes
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: kube-proxy
[upgrade/postupgrade] FATAL post-upgrade error: couldn't retrieve DNS addon deployments: Get "https://controlplane:6443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
To see the stack trace of this error execute with --v=5 or higher
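The run failed only at the post-upgrade DNS addon step: the log shows kube-apiserver, kube-controller-manager, and kube-scheduler each upgraded successfully, and the timeout came from an API server that was still settling after its restart. A sketch of how the session would typically be finished from here, assuming `kubeadm upgrade apply` is re-run to complete the addon step (kubeadm upgrades are designed to be re-runnable) and the version pins follow the transcript:

```shell
# Re-run the apply so the interrupted post-upgrade phase completes.
kubeadm upgrade apply v1.21.0 -y

# Upgrade the node-local components to match the control plane.
apt install -y kubelet=1.21.0-00 kubectl=1.21.0-00
systemctl daemon-reload
systemctl restart kubelet

# The node was drained earlier; make it schedulable again.
kubectl uncordon controlplane
```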
root@controlplane ~$