Drallas
From ClickOps to GitOps, IaC and Cloud-Native skills. Interested in Private Clouds, Opensource, Linux, Ansible, Docker (Swarm), K8S, Proxmox, Bash & Python.
One of the objectives of building my Proxmox HA Cluster was to store persistent Docker volume data inside CephFS folders.
There are many different options to achieve this: Docker Swarm in LXC using bind mounts, or third-party Docker volume plugins that are hard to use and often outdated.
Another option for Docker volumes was running GlusterFS, storing its disks on local NVMe storage instead of CephFS. Although appealing, it adds complexity and unnecessary resource consumption, while I already have a highly available file system (CephFS) running!
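To make the bind-mount option concrete, here is a minimal sketch of what attaching a CephFS-backed directory to a Swarm service could look like. The service name, image and the path /mnt/pve/cephfs/docker/whoami are assumptions for illustration, not the exact setup from this series.

```bash
# Assumed location on the shared CephFS mount (hypothetical path).
mkdir -p /mnt/pve/cephfs/docker/whoami

# Bind-mount that directory into a Swarm service; any node can run the task
# and still see the same data, because the source lives on CephFS.
docker service create \
  --name whoami \
  --mount type=bind,source=/mnt/pve/cephfs/docker/whoami,target=/data \
  traefik/whoami
```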
This is Part 2, focusing on building the Proxmox cluster and setting up Ceph.
See Part 1 about setting up networking for a highly available cluster with Ceph, Part 2 (this one) for how to set up the Proxmox and Ceph cluster itself, and Part 3 focusing on managing and troubleshooting Proxmox and Ceph.
If everything went well in Part 1, setting up Proxmox and Ceph should be 'a walk in the park'!
Hyper-converged Highly Available Homelab with Proxmox
This is me documenting my journey moving my Homelab from a QNAP NAS and a single-host Proxmox server to a hyper-converged multi-node Proxmox cluster.
My reasons to document it here:
Information is often scattered 'all over the place', but never 100% applicable to the setup I have.
To remember 'what the fuck' I did some months ago.
Writing it for 'a public' forces me to think it all through again and make sure it's correct.
It's written 'first to scratch my own itch', but hopefully it benefits others too, or, even better, others will improve upon my implementations. Feel free to comment or share improvements and insights!
How to create an erasure-coded pool in Ceph and use 'directory pinning' to connect it to the CephFS filesystem.
To use an erasure-coded pool with CephFS, a directory inside the CephFS filesystem needs to be connected to the erasure-coded pool; this is called 'directory pinning'.
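A rough sketch of those steps, assuming a pool named cephfs_data_ec, a 2+1 erasure-code profile, the default Proxmox filesystem name cephfs and the directory /mnt/pve/cephfs/docker (all of these are illustrative assumptions, not necessarily the values used in this setup):

```bash
# Create an erasure-code profile and an erasure-coded pool (k/m values are assumptions).
ceph osd erasure-code-profile set ec-profile k=2 m=1 crush-failure-domain=host
ceph osd pool create cephfs_data_ec 32 erasure ec-profile

# CephFS needs partial overwrites enabled on an EC data pool.
ceph osd pool set cephfs_data_ec allow_ec_overwrites true

# Add it as an extra data pool to the CephFS filesystem.
ceph fs add_data_pool cephfs cephfs_data_ec

# 'Pin' a directory to the EC pool via its file layout (needs the attr package).
setfattr -n ceph.dir.layout.pool -v cephfs_data_ec /mnt/pve/cephfs/docker
```

Only files created under that directory after setting the layout end up in the erasure-coded pool; existing files keep their original layout.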
Virtio-fs is a shared file system that lets virtual machines access a directory tree on the host. Unlike existing approaches, it is designed to offer local file system semantics and performance. The new Rust-based virtiofsd daemon that Proxmox 8 uses is receiving the most attention for new feature development.
Performance is very good (in my tests, almost the same as on the Proxmox host).
VM Migration is not possible yet, but it's being worked on!
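As a rough sketch of how a CephFS directory can be exposed to a VM through virtio-fs on a Proxmox 8 host; the daemon path, socket path, VM ID 101, share tag docker and mount points are assumptions for illustration:

```bash
# On the Proxmox host: start the Rust virtiofsd, sharing a CephFS directory with VM 101.
/usr/libexec/virtiofsd \
  --socket-path /run/virtiofsd/vm101-docker.sock \
  --shared-dir /mnt/pve/cephfs/docker \
  --cache auto &

# Attach the socket to the VM with a single 'args:' line in /etc/pve/qemu-server/101.conf
# (vhost-user needs shared memory, and the memory-backend size must match the VM's memory):
# args: -chardev socket,id=virtfs0,path=/run/virtiofsd/vm101-docker.sock
#   -device vhost-user-fs-pci,chardev=virtfs0,tag=docker
#   -object memory-backend-memfd,id=mem,size=4096M,share=on -numa node,memdev=mem

# Inside the guest: mount the share by its tag.
mkdir -p /mnt/docker
mount -t virtiofs docker /mnt/docker
```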
After struggling for some days, and since I really needed this to work (ignoring the 'it can't be done' vibe everywhere), I managed to get Docker to work reliably in privileged Debian 12 LXC containers on Proxmox 8.
(Unfortunately, I couldn't get anything to work in unprivileged LXC containers.)
There are NO modifications required on the Proxmox host or in the /etc/pve/lxc/xxx.conf file; everything is done on the Docker Swarm host. So the only obvious candidate that could break this setup is a future Docker Engine update!
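The exact host-side steps aren't listed at this point, but as a quick, generic way to verify that Docker runs reliably inside such a container (the storage-driver check and the placeholder IP 192.0.2.10 are illustrative, not part of the original write-up):

```bash
# Inside the privileged Debian 12 LXC container: sanity-check the Docker installation.
docker info --format 'Storage driver: {{.Driver}}'   # which storage driver Docker picked
docker run --rm hello-world                          # basic container start/stop test

# Then initialise (or join) the Swarm from this host.
docker swarm init --advertise-addr 192.0.2.10        # placeholder IP of this node
docker node ls
```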