Part of collection: Hyper-converged Homelab with Proxmox
Virtio-fs is a shared file system that lets virtual machines access a directory tree on the host. Unlike existing approaches, it is designed to offer local file system semantics and performance. The new virtiofsd-rs Rust daemon that Proxmox 8 uses is receiving the most attention for new feature development.
- Performance is very good (while testing, almost the same as on the Proxmox host).
- VM migration is not possible yet, but it's being worked on!
Since I have a Proxmox High Availability cluster with Ceph, I like to mount the Ceph File System, with its POSIX-compliant CephFS directories, into my VMs. I have been playing around with LXC containers and bind mounts and even successfully set up Docker Swarm in LXC containers. Unfortunately, this is not a recommended configuration and comes with some trade-offs and cumbersome configuration settings.
This write-up explains how to create Erasure Coded CephFS Pools to store volumes that can then be mounted into a VM via virtiofs.
| This procedure has been tested with Ubuntu Server 22.04 and Debian 12!
Proxmox 8 nodes don't have virtiofsd installed by default, so the first step is to install it.
apt install virtiofsd -y
# Check the version
/usr/lib/kvm/virtiofsd --version
virtiofsd backend 1.7.0
virtiofsd 1.7.0 has many issues (hangs after rebooting the VM, superblock errors, etc...). Versions 1.7.2 and 1.8.0 seem to work much better; they can be found on the virtio-fs releases page. But be careful: this package is not considered stable and is not even in unstable on the Debian Package Tracker.
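If you want to try one of the newer releases, a minimal sketch of swapping in the binary could look like this (the download path and version number are placeholders I'm assuming here; keep a backup of the packaged binary, and note that a later package upgrade may overwrite it again):
# A sketch, assuming the 1.8.0 release binary was already downloaded to ~/virtiofsd-1.8.0/virtiofsd
cp /usr/lib/kvm/virtiofsd /usr/lib/kvm/virtiofsd.bak
install -m 0755 ~/virtiofsd-1.8.0/virtiofsd /usr/lib/kvm/virtiofsd
/usr/lib/kvm/virtiofsd --version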
Still on the Proxmox host!
Get the Hookscript files, copy them to /var/lib/vz/snippets, and make virtiofs_hook.pl executable. Or use the get_hook_script.sh script to download the script files automatically to /var/lib/vz/snippets.
cd ~/
sudo sh -c "wget https://raw.githubusercontent.com/Drallas/Virtio-fs-Hookscript/main/get_hook_script.sh"
sudo chmod +x ~/get_hook_script.sh
./get_hook_script.sh
To set the VMID and the folders that a VM needs to mount, open the virtiofs_hook.conf file.
sudo nano /var/lib/vz/snippets/virtiofs_hook.conf
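For reference, the config maps a VMID to the host folders that should be exported to that VM. The entry below is only an illustrative sketch with made-up values; the exact syntax may differ between hookscript versions, so check the comments in the file itself.
# Hypothetical example: export two CephFS folders to VM 100
100: /mnt/pve/cephfs/multimedia, /mnt/pve/cephfs/docker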
Attach the hookscript to a VM.
qm set <vmid> --hookscript local:snippets/virtiofs_hook.pl
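For example, with a hypothetical VMID of 100:
qm set 100 --hookscript local:snippets/virtiofs_hook.pl
# Verify that the hookscript is attached
qm config 100 | grep hookscript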
That's it; once it's added to the VM, the script does its magic on VM boot:
- Adding the correct args section for virtiofsd to the VM configuration:
args: -object memory-backend-memfd,id=mem,size=4096M,share=on -numa node........
- Creating the virtiofsd sockets and systemd units that are needed for the folders (see the sketch after this list).
- Cleanup on VM Shutdown
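Under the hood, the hookscript launches one virtiofsd instance per shared folder, roughly along the lines of the sketch below. The flags, paths and names are assumptions on my part; the exact invocation depends on the hookscript and virtiofsd version.
# Hypothetical example for VM 100 sharing /mnt/pve/cephfs/docker
/usr/lib/kvm/virtiofsd \
  --socket-path /run/virtiofsd/100-docker.sock \
  --shared-dir /mnt/pve/cephfs/docker \
  --cache auto --announce-submounts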
The VM can now be started and the hookscript takes care of the virtiofsd part.
qm start <vmid>
Check the virtiofsd processes with ps aux | grep virtiofsd, or use systemctl | grep virtiofsd to list the systemd services.
If all is good, there is a running virtiofsd process and a matching systemd unit for each shared folder.
Linux kernels >5.4 inside the VM support Virtio-fs natively.
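A quick way to verify this inside the guest (a small check I'm adding here, not part of the original write-up):
# Inside the VM: kernel version and virtiofs support
uname -r
grep virtiofs /proc/filesystems || modinfo virtiofs | head -n 2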
Mounting is in the format:
mount -t virtiofs <tag> <local-mount-point>
To find the tag, execute qm config <vmid> --current on the Proxmox host and look for tag=<vmid>-<appname> (e.g. tag=xxx-docker) inside the args section:
args: -object memory-backend-memfd,id=mem,size=4096M,share=on -numa node,memdev=mem -chardev socket,id=char1,path=/run/virtiofsd/xxx-docker.sock -device vhost-user-fs-pci,chardev=char1,tag=<vmid>-<appname>
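A small helper to pull just the tags out of the config (my own convenience one-liner, not part of the original hookscript):
# List only the virtiofs tags for a VM
qm config <vmid> --current | grep -o 'tag=[^ ]*'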
# Create a directory
sudo mkdir -p /srv/cephfs-mounts/<foldername>
# Mount the folder
sudo mount -t virtiofs mnt_pve_cephfs_multimedia /srv/cephfs-mounts/<foldername>
# Add them to /etc/fstab
sudo nano /etc/fstab
# Mounts for virtiofs
# The nofail option is used to prevent the system from hanging if the mount fails!
<vmid>-<appname> /srv/cephfs-mounts/<foldername> virtiofs defaults,nofail 0 0
# Mount everything from fstab
sudo systemctl daemon-reload && sudo mount -a
# Verify
ls -lah /srv/cephfs-mounts/<vmid>-<appname>
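Putting it together for a hypothetical VM 100 with an appname of docker (all names below are placeholders, substitute your own tag and folder):
sudo mkdir -p /srv/cephfs-mounts/docker
echo '100-docker /srv/cephfs-mounts/docker virtiofs defaults,nofail 0 0' | sudo tee -a /etc/fstab
sudo systemctl daemon-reload && sudo mount -a
ls -lah /srv/cephfs-mounts/docker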
- New VMs tend to throw a 'superblock' error on first boot:
mount: /srv/cephfs-mounts/download: wrong fs type, bad option, bad superblock on mnt_pve_cephfs_multimedia, missing codepage or helper program, or other error.
dmesg(1) may have more information after failed mount system call.
To solve this, I power off the VM with sudo /sbin/shutdown -HP now and then start it again from the host with qm start <vmid>; everything should mount fine now.
- Adding an extra volume also throws a 'superblock' error.
qm stop <vmid>
sudo nano /etc/pve/qemu-server/<vmid>.conf
# Remove the args entry
args: -object memory-backend-memfd,id=mem,size=4096M,share=on..
qm start <vmid>
Now the volumes all throw a superblock error; I power off the VM with sudo /sbin/shutdown -HP now and then start it again from the host with qm start <vmid>; everything should mount fine again.
To remove Virtio-fs from a VM and from the host:
nano /etc/pve/qemu-server/xxx.conf
# Remove the following lines
hookscript: local:snippets/virtiofs_hook.pl
args: -object memory-backend-memfd,id=mem,size=4096M,share=on..
Disable each virtiofsd-xxx service; replace xxx with the correct values, or handle them all at once with a wildcard (see the sketch after these commands).
systemctl disable virtiofsd-xxx
sudo systemctl reset-failed virtiofsd-xxx
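A sketch of doing this for every unit in one go, assuming the units follow the virtiofsd-* naming used above:
# Disable and reset every virtiofsd-* unit (run as root on the Proxmox host)
for unit in $(systemctl list-units --all --plain --no-legend 'virtiofsd-*' | awk '{print $1}'); do
  systemctl disable "$unit"
  systemctl reset-failed "$unit"
done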
This should be enough, but if the references persist:
# Remove leftover sockets and services.
rm -rf /etc/systemd/system/virtiofsd-xxx
rm -rf /etc/systemd/system/xxx.scope.requires/
rmdir /sys/fs/cgroup/system.slice/'system-virtiofsd\xxx'
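After removing leftover unit files, it's usually a good idea to let systemd forget about them (an extra step, not from the original write-up):
systemctl daemon-reload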
If needed, reboot the host to make sure all references are purged from the system state.
- [TUTORIAL] virtiofsd in PVE 8.0.x
- Sharing filesystems with virtiofs between multiple VMs
- virtiofsd - vhost-user virtio-fs device backend written in Rust
I don't see any of these issues myself: my nodes often get rebooted at random, and everything keeps on running fine. I only had a corrupted FreshRSS db once..