Hypervisor Host Based CephFS pass through with VirtioFS

Using VirtioFS backed by CephFS for bind mounts

This is currently work-in-progress documentation - rough notes for me; it may be missing a lot or be wrong.

The idea is to replace GlusterFS running inside the VMs with storage on my CephFS cluster. This is my Proxmox cluster, which runs the storage and is also the hypervisor for my docker VMs.

Other possible approaches:

  • ceph-fuse client in the VM to mount CephFS, or Ceph RBD over IP
  • use of a ceph docker volume plugin (no usable version of this exists yet, but it is being worked on)

Assumptions:

  • I already have a working Ceph cluster - this will not be documented in this gist. See my proxmox gist for a working example.
  • this is for Proxmox as a combined hypervisor + Ceph cluster, with the VMs hosted on the same Proxmox nodes that run Ceph

Workflow

Create a new cephFS on the proxmox cluster

I created one called docker

(screenshot)

The storage ID is docker-cephFS (I chose this name as I will play with Ceph in a variety of other ways too)

(screenshot)
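
For reference, roughly the same thing can be done from a Proxmox node's CLI. This is only a sketch (I did it via the GUI as shown in the screenshots); check the flags with pveceph help fs create on your version, and note that --add-storage appears to register the storage under the filesystem's name rather than a custom ID like docker-cephFS.

# create a CephFS named "docker" and register it as a Proxmox storage
pveceph fs create --name docker --add-storage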

Add this to directory mappings

(screenshot)

Configure the docker host VMs to pass the mapping through

(screenshot)

In each VM

  • sudo mkdir /mnt/docker-cephFS/
  • sudo nano /etc/fstab
    • add the entry docker-cephFS /mnt/docker-cephFS virtiofs defaults 0 0 (optionally preceded by a comment line such as #for virtiofs mapping)
    • save the file
  • sudo systemctl daemon-reload
  • sudo mount -a
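
All the steps together as a quick sketch (the storage ID and mount point are from my setup above - adjust them if your directory mapping uses a different name):

# run inside each docker host VM
sudo mkdir -p /mnt/docker-cephFS
# append the virtiofs entry to /etc/fstab
echo 'docker-cephFS /mnt/docker-cephFS virtiofs defaults 0 0' | sudo tee -a /etc/fstab
# reload systemd mount units and mount everything in fstab
sudo systemctl daemon-reload
sudo mount -a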

Migrating existing Docker Swarm stacks

Basically it is:

  • stop the stack

  • mv the data from /mnt/gluster-vol1/dirname to /mnt/docker-cephFS/dirname

  • edit the stack to change the volume definitions from my gluster definition to a local volume - this means no editing of the service volume lines (see the sketch after the example below)

Example from my wordpress stack

volumes:
  dbdata:
    driver: gluster-vol1
  www:
    driver: gluster-vol1

to

volumes:
  dbdata:
    driver: local
    driver_opts:
      type: none
      device: "/mnt/docker-cephFS/wordpress_dbdata"
      o: bind

  www:
    driver: local
    driver_opts:
      type: none
      device: "/mnt/docker-cephFS/wordpress_www"
      o: bind

  • triple check everything
  • restart the stack
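
For one of my stacks the sequence looks roughly like this (the stack and file names are illustrative - substitute your own):

# stop the stack
docker stack rm wordpress
# move the data from the old gluster mount to the cephFS mount
sudo mv /mnt/gluster-vol1/wordpress_dbdata /mnt/docker-cephFS/wordpress_dbdata
sudo mv /mnt/gluster-vol1/wordpress_www /mnt/docker-cephFS/wordpress_www
# edit the stack file's volumes: section as shown above, then redeploy
docker stack deploy -c wordpress.yml wordpress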

If you get an error about the volume already being defined, you may need to delete the old volume definition by hand - this can easily be done in Portainer or using the docker volume command.
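
For example (the volume names are illustrative - Swarm prefixes them with the stack name):

# list the leftover volumes for the stack, then remove the stale definitions
docker volume ls --filter name=wordpress
docker volume rm wordpress_dbdata wordpress_www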

Backup

I haven't figured out an ideal strategy for backing up the cephFS from the host or from the VM yet - with gluster the bricks were stored on a dedicated vdisk, and that was backed up as part of the PBS backup of the VM.

As the virtioFS share is not presented as a disk, this doesn't happen (which is reasonable, as the cephFS is not VM specific).


Drallas commented May 3, 2025

Yes I did; dswarm01 is local on Proxmox, the others are cloud. To make it more complex, it's running behind Tailscale with no SSHD active - the only ways in are Tailscale SSH and Cloudflare Tunnels - and Docker Swarm is running on the (internal) Tailscale interface.


scyto commented May 3, 2025

Interesting. I did some quick ChatGPT digging (I've been using it a lot to make me scripts as I can't program - I have a great script that creates per-service RBDs, the ceph keys and ceph secrets, and generates a script to deploy all the keys on a client; just waiting for the guy working on the ceph volume driver for docker....)

What database did you pick - and do you assume it will just be reliable, or did you cluster it at the hoster providing the database?

https://chatgpt.com/share/68163190-74b4-800d-8f7a-393eaaad89ee


Drallas commented May 3, 2025

I prefer not to share that one - it's using a free service that might get overwhelmed if made public. I recommend investigating deeply, and you will stumble upon some interesting options. :)


Drallas commented May 3, 2025

(image: storage)

@RonanAshby

Great approach! VirtioFS with CephFS could boost performance significantly.


scyto commented May 6, 2025

Great approach! VirtioFS with CephFS could boost performance significantly.

Yeah, I had some very interesting perf results.... https://forum.proxmox.com/threads/i-want-to-like-virtiofs-but.164833/post-768186
