- Assume you have two Rancher instances with the same Rancher version and the same underlying Kubernetes infrastructure.
- Install rancher-backup on both Rancher instances.
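If you install the chart from the CLI instead of the Rancher UI (Apps & Marketplace), a minimal sketch using the rancher-charts Helm repository:
# Install the backup operator CRDs and chart into cattle-resources-system
helm repo add rancher-charts https://charts.rancher.io
helm repo update
helm install rancher-backup-crd rancher-charts/rancher-backup-crd -n cattle-resources-system --create-namespace
helm install rancher-backup rancher-charts/rancher-backup -n cattle-resources-system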
- Perform a backup on the source instance. The easiest way is to use S3 storage for the migration; for this, create a Secret with the S3 credentials (see the sketch below).
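A minimal sketch of the credentials Secret and a Backup resource pointing at an S3 bucket; the names, bucket, region and endpoint are placeholders to adjust:
apiVersion: v1
kind: Secret
metadata:
  name: s3-backup-creds
  namespace: default
type: Opaque
stringData:
  # accessKey/secretKey are the key names the rancher-backup operator expects
  accessKey: "<s3-access-key>"
  secretKey: "<s3-secret-key>"
---
apiVersion: resources.cattle.io/v1
kind: Backup
metadata:
  name: migration-backup
spec:
  resourceSetName: rancher-resource-set
  storageLocation:
    s3:
      credentialSecretName: s3-backup-creds
      credentialSecretNamespace: default
      bucketName: <bucket-name>
      folder: rancher
      region: <s3-region>
      endpoint: <s3-endpoint>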
- Restore the backup on the destination instance (see the example Restore manifest below the warnings). EXTRA WARNINGS:
  - This step will import all clusters, nodes, users and tokens.
  - Rancher settings will be overwritten with those from the source instance, including the admin user, third-party auth (e.g. Keycloak) and the UI layout.
  - If the restore task doesn't complete, it will retry in a loop. You may observe restart loops of the Rancher pods, which are caused by the restore process. Check the rancher-backup pod logs in the cattle-resources-system namespace. In that case the restore should be stopped or deleted with kubectl delete restore.
  - Issues with Rancher 2.6.4 caused inconsistent CRDs, e.g. listenconfigs.management.cattle.io will complain (ref: rancher/backup-restore-operator#186). You can fix the CRDs manually on the target instance; the versions section of the CRD should look like:
versions:
- name: v3
  served: true
  storage: true
  schema:
    openAPIV3Schema:
      x-kubernetes-preserve-unknown-fields: true
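For the restore itself, a minimal sketch of the Restore resource referenced above, assuming the same S3 credentials Secret exists on the destination instance; prune is set to false as required for a migration, and backupFilename is the exact file name found in the bucket:
apiVersion: resources.cattle.io/v1
kind: Restore
metadata:
  name: migration-restore
spec:
  backupFilename: <backup-file-name>.tar.gz
  prune: false
  storageLocation:
    s3:
      credentialSecretName: s3-backup-creds
      credentialSecretNamespace: default
      bucketName: <bucket-name>
      folder: rancher
      region: <s3-region>
      endpoint: <s3-endpoint>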
- On the destination instance, change the server-url under Global Settings in the Rancher UI to the new URL of the destination instance.
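If you prefer the CLI over the UI, the same setting can be changed on the local (Rancher management) cluster via its settings resource; a sketch, with the URL as a placeholder:
# Show the current server-url setting
kubectl get settings.management.cattle.io server-url
# Set it to the URL of the destination instance
kubectl patch settings.management.cattle.io server-url --type=merge -p '{"value":"https://k3s.otc.de"}'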
- Re-create the SSL certificate for the Rancher ingress.
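A sketch for the case where Rancher serves a certificate from your own files; tls-rancher-ingress is the secret name the Rancher chart uses for private/custom certificates, so adjust if cert-manager manages the certificate instead:
# Replace the ingress certificate with one matching the new hostname
kubectl -n cattle-system create secret tls tls-rancher-ingress \
  --cert=tls.crt --key=tls.key \
  --dry-run=client -o yaml | kubectl apply -f -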
- Create a Bearer token for the Rancher API and get the cluster ID of the downstream cluster.
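To find the cluster ID, you can list the clusters via the Rancher API with the Bearer token, e.g.:
# List cluster IDs and names known to the destination Rancher instance
curl -s -H "Authorization: Bearer ${TOKEN}" "${RANCHERURL}/v3/clusters" | jq -r '.data[] | .id + "  " + .name'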
- SSH to a controlplane node of the downstream cluster and generate a local KUBECONFIG file:
docker run --rm --net=host -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro --entrypoint bash $(docker inspect $(docker images -q --filter=label=io.cattle.agent=true) --format='{{index .RepoTags 0}}' | tail -1) -c 'kubectl --kubeconfig /etc/kubernetes/ssl/kubecfg-kube-node.yaml get configmap -n kube-system full-cluster-state -o json | jq -r .data.\"full-cluster-state\" | jq -r .currentState.certificatesBundle.\"kube-admin\".config | sed -e "/^[[:space:]]*server:/ s_:.*_: \"https://127.0.0.1:6443\"_"' > kubeconfig_admin.yaml
- Prepare the agent registration commands, based on the Bearer token and cluster ID from the previous steps:
# Rancher URL
RANCHERURL="https://k3s.otc.de"
# Cluster ID
CLUSTERID="c-xxxxx"
# Token
TOKEN="token-xxxx"
# Valid certificates
curl -s -H "Authorization: Bearer ${TOKEN}" "${RANCHERURL}/v3/clusterregistrationtokens?clusterId=${CLUSTERID}" | jq -r '.data[] | select(.name != "system") | .command'
# Self signed certificates
curl -s -k -H "Authorization: Bearer ${TOKEN}" "${RANCHERURL}/v3/clusterregistrationtokens?clusterId=${CLUSTERID}" | jq -r '.data[] | select(.name != "system") | .insecureCommand'
- Start registration agent
docker run --rm --net=host -v $PWD/kubeconfig_admin.yaml:/root/.kube/config --entrypoint bash $(docker inspect $(docker images -q --filter=label=io.cattle.agent=true) --format='{{index .RepoTags 0}}' | tail -1) -c 'curl --insecure -sfL https://k3s.de/v3/import/mbv8kk4ttjm9g8zxxlwdgl8khhhmvkkxxwwf6ddhk8mqtjlwmt79l_xxxxx.yaml | kubectl apply -f -'
- Restart node/cluster agents
kubectl -n cattle-system rollout restart deployment cattle-cluster-agent
kubectl -n cattle-system rollout restart daemonset cattle-node-agent
- Check the agent logs and verify in the target Rancher instance that the cluster is in the Available state.
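For the agent logs, the deployment/daemonset names from the restart step above can be used directly, e.g.:
kubectl -n cattle-system logs deployment/cattle-cluster-agent --tail=50
kubectl -n cattle-system logs daemonset/cattle-node-agent --tail=50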
- Inform the customer that all credentials need to be changed/renewed due to the new server URL.
References:
- https://rancher.com/docs/rancher/v2.6/en/backups/migrating-rancher/
- https://rancher.com/docs/rancher/v2.5/en/installation/resources/update-ca-cert/#method-3-recreate-rancher-agents
- https://gist.github.com/superseb/076f20146e012f1d4e289f5bd1bd4971
- https://gist.github.com/superseb/b14ed3b5535f621ad3d2aa6a4cd6443b
- rancher/backup-restore-operator#186