Velero-demo-kubernetes
This is an end-to-end demonstration of getting Velero to do a backup/restore in DigitalOcean Kubernetes. This will not work on other providers, because we install the DigitalOcean-specific plugin for Velero. Likewise, we require Spaces as the backup destination in the first release. Backblaze may work if it is sufficiently similar to Spaces and no complex permissions are involved.
In v1, we will only support DO Kubernetes, with the destinations being Spaces and DO Volumes.
Commands needed: kubectl, velero
Credentials needed: S3/Spaces keys (for Velero to save backups), a DO Cloud API token (for Velero to take volume snapshots from k8s), and a kubeconfig (for Velero to access the k8s cluster)
This diagram shows how Velero works: https://github.com/digitalocean/Kubernetes-Starter-Kit-Developers/blob/main/05-setup-backup-restore/assets/images/velero_bk_res_wf.png
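The velero install step later passes --secret-file=./spaces-credentials. That file uses the standard AWS shared-credentials format (the key names below are the standard ones; the values are placeholders you fill in with your Spaces access key pair):

```ini
[default]
aws_access_key_id=<YOUR_SPACES_ACCESS_KEY>
aws_secret_access_key=<YOUR_SPACES_SECRET_KEY>
```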
- Install a DOKS (DigitalOcean Kubernetes) cluster.
Follow the steps through the UI - https://cloud.digitalocean.com/kubernetes. It is fairly straightforward. Make sure to check your
- Verify connectivity to your cluster. You should have kubectl installed on your client machine (laptop).
root@velero-client:~# kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.2", GitCommit:"fc04e732bb3e7198d2fa44efa5457c7c6f8c0f5b", GitTreeState:"clean", BuildDate:"2023-03-01T02:22:36Z", GoVersion:"go1.19.6", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.4", GitCommit:"872a965c6c6526caa949f0c6ac028ef7aff3fb78", GitTreeState:"clean", BuildDate:"2022-11-09T13:29:58Z", GoVersion:"go1.19.3", Compiler:"gc", Platform:"linux/amd64"}
root@velero-client:~# kubectl get node
NAME STATUS ROLES AGE VERSION
pool-drog3ou6c-qlqoc Ready <none> 62m v1.25.4
pool-drog3ou6c-qlqom Ready <none> 62m v1.25.4
root@velero-client:~#
- Install the velero CLI on the client (brew install velero on macOS; for Linux, see below).
https://velero.io/docs/v1.10/basic-install/
wget https://github.com/vmware-tanzu/velero/releases/download/v1.10.2/velero-v1.10.2-linux-amd64.tar.gz
tar -xvf velero-v1.10.2-linux-amd64.tar.gz
sudo mv velero-v1.10.2-linux-amd64/velero /usr/local/bin
chmod +x /usr/local/bin/velero
root@velero-client:~# velero version
Client:
Version: v1.10.2
Git commit: 7416504e3a8fea40f78bbc4cfefc7c642aafc812
<error getting server version: no matches for kind "ServerStatusRequest" in version "velero.io/v1">
root@velero-client:~#
The server-version error is expected at this point, because Velero has not been installed into the cluster yet.
- Now we need to install Velero on the server (the Kubernetes cluster), using the velero install command. Important: the velero CLI uses the same kubeconfig file (~/.kube/config) that kubectl uses.
Remember that Velero needs Spaces credentials to configure Spaces as the backup destination, and DO API credentials for volume snapshots. We will install two plugins (the AWS plugin for S3/Spaces, and the DO plugin for volumes), following the steps here:
https://github.com/digitalocean/velero-plugin
After the basic install, we will customize Velero to access volumes.
Remember, we use provider aws because Spaces is S3-compatible.
root@velero-client:~/velero# velero install --provider velero.io/aws --bucket velero-bg --plugins velero/velero-plugin-for-aws:v1.6.0,digitalocean/velero-plugin:v1.1.0 --backup-location-config s3Url=https://fra1.digitaloceanspaces.com,region=fra1 --use-volume-snapshots=false --secret-file=./spaces-credentials
Velero is installed! ⛵ Use 'kubectl logs deployment/velero -n velero' to view the status.
root@velero-client:~/velero# kubectl get pod -n velero
NAME READY STATUS RESTARTS AGE
velero-6d4d99f4db-jdhl5 1/1 Running 0 2m12s
root@velero-client:~/velero# kubectl logs deployment/velero -n velero | tail -5
Defaulted container "velero" out of: velero, velero-velero-plugin-for-aws (init), digitalocean-velero-plugin (init)
time="2023-03-13T03:41:50Z" level=info msg="Validating BackupStorageLocation" backup-storage-location=velero/default controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:151"
time="2023-03-13T03:41:51Z" level=info msg="BackupStorageLocations is valid, marking as available" backup-storage-location=velero/default controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:136"
time="2023-03-13T03:41:51Z" level=error msg="Current BackupStorageLocations available/unavailable/unknown: 0/0/1)" controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:193"
time="2023-03-13T03:43:00Z" level=info msg="Validating BackupStorageLocation" backup-storage-location=velero/default controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:151"
time="2023-03-13T03:43:01Z" level=info msg="BackupStorageLocations is valid, marking as available" backup-storage-location=velero/default controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:136"
root@velero-client:~/velero#
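Beyond the logs, the velero CLI can confirm that the storage location and plugins registered correctly. A quick check (these commands need the live cluster, so exact output will vary):

```shell
velero backup-location get   # the default location should report Available
velero plugin get            # should list the aws object-store and digitalocean volume-snapshotter plugins
```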
- So now we have Velero installed, and we need to configure volume snapshots. The reason we take volume snapshots from inside Kubernetes, and not from outside, is that volumes are Kubernetes objects (PersistentVolumes): you cannot restore an arbitrary externally-taken snapshot into a Kubernetes volume.
root@velero-client:~/velero# velero snapshot-location create default --provider digitalocean.com/velero
Snapshot volume location "default" configured successfully.
root@velero-client:~/velero# kubectl patch secret cloud-credentials -p "$(cat 01-velero-secret.patch.yaml)" --namespace velero
cat: 01-velero-secret.patch.yaml: No such file or directory
error: must specify --patch or --patch-file containing the contents of the patch
root@velero-client:~/velero# kubectl patch secret cloud-credentials -p "$(cat 01-velero-secret-patch.yaml)" --namespace velero
secret/cloud-credentials patched
root@velero-client:~/velero# kubectl patch deployment velero -p "$(cat 02-velero-deployment-patch.yaml)" --namespace velero
deployment.apps/velero patched
root@velero-client:~/velero#
(The first patch attempt failed only because of a typo in the filename.)
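The two patch files used above come from the digitalocean/velero-plugin repo. If you don't have them handy, the sketch below generates equivalent minimal patches; the token value is a hypothetical placeholder, while the secret key name digitalocean_token and env var DIGITALOCEAN_TOKEN match what the verification step shows in the deployment.

```shell
#!/bin/sh
# Replace the placeholder with your real DO API token (assumption: shown here as a dummy value).
DO_TOKEN="dop_v1_example_token"
# Kubernetes Secret data must be base64-encoded; printf avoids a trailing newline.
B64_TOKEN=$(printf '%s' "$DO_TOKEN" | base64)

# Patch 1: add the DO API token to the existing cloud-credentials secret.
cat > 01-velero-secret-patch.yaml <<EOF
data:
  digitalocean_token: ${B64_TOKEN}
EOF

# Patch 2: expose the token to the velero container as DIGITALOCEAN_TOKEN.
cat > 02-velero-deployment-patch.yaml <<EOF
spec:
  template:
    spec:
      containers:
      - name: velero
        env:
        - name: DIGITALOCEAN_TOKEN
          valueFrom:
            secretKeyRef:
              name: cloud-credentials
              key: digitalocean_token
EOF
```

After running it, the same kubectl patch commands as above apply these files.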
Now we can verify that volume snapshots are configured.
root@velero-client:~/velero# kubectl get deploy velero -n velero -oyaml | grep -i digitalocean_token
- name: DIGITALOCEAN_TOKEN
key: digitalocean_token
root@velero-client:~/velero# kubectl get secrets -n velero
NAME TYPE DATA AGE
cloud-credentials Opaque 2 7m15s
velero-repo-credentials Opaque 1 6m28s
root@velero-client:~/velero# kubectl get secrets cloud-credentials -n velero -oyaml
- Now we have everything set up and are ready to try a backup/restore. Let us create an NGINX resource. We will back it up, delete it, and then restore it.
kubectl apply -f https://raw.githubusercontent.com/digitalocean/velero-plugin/main/examples/nginx-example.yaml
Verify it is up. You need to give the load balancer 5-10 minutes to come up.
root@velero-client:~/velero# kubectl get ns
NAME STATUS AGE
default Active 93m
kube-node-lease Active 93m
kube-public Active 93m
kube-system Active 93m
nginx-example Active 70s
velero Active 10m
root@velero-client:~/velero# kubectl get po -n nginx-example
NAME READY STATUS RESTARTS AGE
nginx-deploy-55b67d74f6-s655q 1/1 Running 0 81s
root@velero-client:~/velero# kubectl get svc -n nginx-example
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-svc LoadBalancer 10.245.173.53 <pending> 80:32203/TCP 88s
root@velero-client:~/velero#
root@velero-client:~/velero# kubectl get svc -n nginx-example
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-svc LoadBalancer 10.245.173.53 206.189.243.145 80:32203/TCP 3m54s
root@velero-client:~/velero#
root@velero-client:~/velero# doctl compute volume list
ID Name Size Region Filesystem Type Filesystem Label Droplet IDs Tags
2cb470cb-c152-11ed-b6eb-0a58ac148371 pvc-a7fbec3a-3886-4dda-a11f-7fe791d9d04b 5 GiB ams3 ext4 [345167723] k8s:0e843c9c-85fa-422b-b4cf-2443951df81d
root@velero-client:~/velero#
- Now we take a backup.
velero backup create nginx-backup --include-namespaces nginx-example --csi-snapshot-timeout=20m
velero backup describe nginx-backup - to check the backup status
doctl compute snapshot list - to verify the volume snapshot
s3cmd la --recursive s3://velero-bg - to verify the upload to Spaces
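A one-off backup like this can also be turned into a recurring one with velero's schedule support. A sketch (the schedule name, cron expression, and retention period here are our choices, not anything prescribed by the demo):

```shell
# Back up the namespace daily at 01:00, keeping each backup for 7 days
velero schedule create nginx-daily \
  --schedule="0 1 * * *" \
  --include-namespaces nginx-example \
  --ttl 168h0m0s
velero schedule get
velero backup get   # scheduled backups appear with the schedule name plus a timestamp suffix
```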
- Now we can delete the NGINX namespace and recreate it using Velero.
root@velero-client:~/velero# kubectl delete namespaces nginx-example
namespace "nginx-example" deleted
root@velero-client:~/velero# kubectl get ns
NAME STATUS AGE
default Active 115m
kube-node-lease Active 115m
kube-public Active 115m
kube-system Active 115m
velero Active 32m
root@velero-client:~/velero# doctl compute volume list
ID Name Size Region Filesystem Type Filesystem Label Droplet IDs Tags
root@velero-client:~/velero#
Note that deleting the namespace also deleted the PVC and its backing DO volume - the volume list above is now empty.
- Let us restore using Velero.
root@velero-client:~/velero# velero restore create --from-backup nginx-backup
Restore request "nginx-backup-20230313041434" submitted successfully.
Run `velero restore describe nginx-backup-20230313041434` or `velero restore logs nginx-backup-20230313041434` for more details.
root@velero-client:~/velero#
root@velero-client:~/velero# kubectl get ns
NAME STATUS AGE
default Active 117m
kube-node-lease Active 117m
kube-public Active 117m
kube-system Active 117m
nginx-example Active 31s
velero Active 34m
root@velero-client:~/velero# kubectl get pod nginx-example
Error from server (NotFound): pods "nginx-example" not found
root@velero-client:~/velero# kubectl get pod -n nginx-example
NAME READY STATUS RESTARTS AGE
nginx-deploy-55b67d74f6-s655q 1/1 Running 0 48s
root@velero-client:~/velero# kubectl get svc -n nginx-example
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-svc LoadBalancer 10.245.153.96 <pending> 80:31682/TCP 58s
root@velero-client:~/velero# doctl compute volume list
ID Name Size Region Filesystem Type Filesystem Label Droplet IDs Tags
96567407-c155-11ed-b6eb-0a58ac148371 restore-a8c8e5c9-fc21-4f40-8c21-145802a73c8f 5 GiB ams3 ext4 [345167723] k8s:0e843c9c-85fa-422b-b4cf-2443951df81d
root@velero-client:~/velero# kubectl get pvc -n nginx-example
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nginx-logs Bound pvc-a7fbec3a-3886-4dda-a11f-7fe791d9d04b 5Gi RWO do-block-storage 93s
root@velero-client:~/velero#
So everything is created back again. Here we demonstrated backup and restore of selected namespaces at a time; Velero can also back up the whole cluster.
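If you need to restore into a different namespace rather than the original one (for example, to compare against a still-running copy), velero supports namespace mappings on restore. A sketch, where nginx-restored is a hypothetical target namespace:

```shell
velero restore create --from-backup nginx-backup \
  --namespace-mappings nginx-example:nginx-restored
```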
References:
- https://github.com/digitalocean/Kubernetes-Starter-Kit-Developers/blob/main/05-setup-backup-restore/velero.md
- https://github.com/vmware-tanzu/velero
- https://github.com/digitalocean/velero-plugin
- https://velero.io/docs/v1.10/examples/