Howto: migrate an RKE downstream cluster from one seed cluster to another with a different URL

  1. Assume you have two Rancher instances with the same Rancher version and the same underlying Kubernetes infrastructure.

  2. Install rancher-backup on both Rancher instances, e.g. via Helm:

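A minimal sketch using the official rancher-charts repository; run this on each seed cluster:

# add the official Rancher chart repository
helm repo add rancher-charts https://charts.rancher.io
helm repo update
# install the CRDs first, then the operator, into the cattle-resources-system namespace
helm install rancher-backup-crd rancher-charts/rancher-backup-crd -n cattle-resources-system --create-namespace
helm install rancher-backup rancher-charts/rancher-backup -n cattle-resources-system
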
  3. Perform a backup on the source instance. The easiest way to migrate is via S3 storage; for this, create a Secret with the S3 credentials, for example:

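A sketch of the credentials Secret and a one-shot Backup resource; bucket name, endpoint, and folder are placeholders for your S3 storage:

# credentials secret expected by rancher-backup (keys: accessKey/secretKey)
kubectl create secret generic s3-backup-creds -n cattle-resources-system \
  --from-literal=accessKey=<ACCESS_KEY> \
  --from-literal=secretKey=<SECRET_KEY>
# one-shot backup of the rancher-resource-set to S3
kubectl apply -f - <<'EOF'
apiVersion: resources.cattle.io/v1
kind: Backup
metadata:
  name: migration-backup
spec:
  resourceSetName: rancher-resource-set
  storageLocation:
    s3:
      bucketName: rancher-backup        # placeholder
      credentialSecretName: s3-backup-creds
      credentialSecretNamespace: cattle-resources-system
      endpoint: s3.example.com          # placeholder
      folder: rancher
EOF
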
  4. Restore the backup on the destination instance (a minimal Restore manifest is sketched after the warnings below). EXTRA WARNINGS:

    • This step will import all clusters, nodes, users, and tokens.
    • Rancher settings will be overwritten by those from the source instance, incl. the admin user, third-party auth like Keycloak, and the UI layout.
    • If the restore task doesn't complete, it will retry in a loop. You may observe restart loops of Rancher pods caused by the restore process; check the logs of the rancher-backup pod in the cattle-resources-system namespace. In that case the restore should be stopped, or the Restore resource deleted with kubectl delete restore.
    • Rancher 2.6.4 has issues with inconsistent CRDs, e.g. listenconfigs.management.cattle.io will complain. You can fix the CRDs manually on the target instance, like:
        versions:
        - name: v3
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              x-kubernetes-preserve-unknown-fields: true
      ref: rancher/backup-restore-operator#186
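
A minimal Restore manifest for the destination instance, assuming the same S3 location as in step 3; the backup file name is a placeholder (take the actual name from the bucket), and prune: false is usually recommended when migrating to a new instance:

kubectl apply -f - <<'EOF'
apiVersion: resources.cattle.io/v1
kind: Restore
metadata:
  name: migration-restore
spec:
  backupFilename: migration-backup-xxxx.tar.gz   # placeholder, as listed in the bucket
  prune: false
  storageLocation:
    s3:
      bucketName: rancher-backup
      credentialSecretName: s3-backup-creds
      credentialSecretNamespace: cattle-resources-system
      endpoint: s3.example.com
      folder: rancher
EOF
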
  5. On the destination instance, change the server-url in the Rancher UI under Global Settings to the new URL of the destination instance.

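The same change can also be scripted against the seed cluster; a sketch, with the new URL as a placeholder:

# Global Settings are backed by the settings.management.cattle.io CRD
kubectl patch settings.management.cattle.io server-url --type merge \
  -p '{"value":"https://rancher-new.example.com"}'
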
  6. Re-create the SSL certificate for the Rancher ingress.

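A sketch, assuming Rancher serves its certificate from the tls-rancher-ingress secret (ingress.tls.source=secret); the cert/key files for the new hostname are placeholders:

kubectl -n cattle-system delete secret tls-rancher-ingress
kubectl -n cattle-system create secret tls tls-rancher-ingress \
  --cert=tls.crt --key=tls.key
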
  7. Create a Bearer Token on the destination instance and get the cluster ID of the downstream cluster.

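The Bearer Token is created in the Rancher UI under Account & API Keys. The cluster ID (c-xxxxx) is the name of the cluster object on the seed cluster, e.g.:

# list downstream clusters with their IDs and display names
kubectl get clusters.management.cattle.io \
  -o custom-columns=ID:.metadata.name,NAME:.spec.displayName
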
  8. ssh to the control plane nodes of the downstream cluster and generate a local KUBECONFIG file:

# run kubectl inside the Rancher agent image, extract the kube-admin kubeconfig
# from the full-cluster-state configmap, and point it at the local API server
docker run --rm --net=host \
  -v $(docker inspect kubelet --format '{{ range .Mounts }}{{ if eq .Destination "/etc/kubernetes" }}{{ .Source }}{{ end }}{{ end }}')/ssl:/etc/kubernetes/ssl:ro \
  --entrypoint bash \
  $(docker inspect $(docker images -q --filter=label=io.cattle.agent=true) --format='{{index .RepoTags 0}}' | tail -1) \
  -c 'kubectl --kubeconfig /etc/kubernetes/ssl/kubecfg-kube-node.yaml get configmap -n kube-system full-cluster-state -o json | jq -r .data.\"full-cluster-state\" | jq -r .currentState.certificatesBundle.\"kube-admin\".config | sed -e "/^[[:space:]]*server:/ s_:.*_: \"https://127.0.0.1:6443\"_"' \
  > kubeconfig_admin.yaml
  9. Prepare the agent registration command, based on the token and cluster ID from step 7:
# Rancher URL
RANCHERURL="https://k3s.otc.de"
# Cluster ID
CLUSTERID="c-xxxxx"
# Token
TOKEN="token-xxxx"
# With valid certificates
curl -s -H "Authorization: Bearer ${TOKEN}" "${RANCHERURL}/v3/clusterregistrationtokens?clusterId=${CLUSTERID}" | jq -r '.data[] | select(.name != "system") | .command'
# With self-signed certificates
curl -s -k -H "Authorization: Bearer ${TOKEN}" "${RANCHERURL}/v3/clusterregistrationtokens?clusterId=${CLUSTERID}" | jq -r '.data[] | select(.name != "system") | .insecureCommand'
  10. Start the registration agent with the command from step 9:
docker run --rm --net=host \
  -v $PWD/kubeconfig_admin.yaml:/root/.kube/config \
  --entrypoint bash \
  $(docker inspect $(docker images -q --filter=label=io.cattle.agent=true) --format='{{index .RepoTags 0}}' | tail -1) \
  -c 'curl --insecure -sfL https://k3s.de/v3/import/mbv8kk4ttjm9g8zxxlwdgl8khhhmvkkxxwwf6ddhk8mqtjlwmt79l_xxxxx.yaml | kubectl apply -f -'
  11. Restart the node and cluster agents:
kubectl -n cattle-system rollout restart deployment cattle-cluster-agent
kubectl -n cattle-system rollout restart daemonset cattle-node-agent
  12. Check the agent logs (see below) and verify on the target Rancher instance that the cluster reaches state Available.
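For example, via the agent pod labels:

# the agents label their pods app=cattle-cluster-agent / app=cattle-node-agent
kubectl -n cattle-system logs -l app=cattle-cluster-agent --tail=100
kubectl -n cattle-system logs -l app=cattle-node-agent --tail=100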
  13. Inform the customer that all credentials need to be changed/renewed due to the new server URL.
