
Enable any LAN client to access mesh

Version 0.5 (2025.04.29)

I have other devices on my LAN that need to access the Ceph mesh. This gist is only needed if you want LAN clients to access the Ceph mesh.

Goals

  • let any client on LAN access the mesh
  • avoid setting static routes on my router
  • enable support for routing topology changes without having to reconfigure the router (a BGP sketch follows this list)
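One way to hit all three goals is to have each Proxmox node advertise the mesh prefix to the router over BGP, so the router learns the routes dynamically and follows topology changes on its own. A minimal frr.conf sketch follows; the ASNs, addresses, and the 10.0.0.80/28 mesh prefix are placeholder assumptions, not values from this gist:

    # /etc/frr/frr.conf on one mesh node (sketch; all values are placeholders)
    router bgp 65001
     bgp router-id 192.168.1.81
     ! peer with the LAN router, which runs AS 65000 in this sketch
     neighbor 192.168.1.1 remote-as 65000
     address-family ipv4 unicast
      ! advertise the mesh prefix so the router learns a path to it
      network 10.0.0.80/28
     exit-address-family

With each node peering like this, a failed node simply withdraws its routes and the router adapts without any static configuration.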



🛠 Thunderbolt Mesh Setup – Staged Guide (Proxmox + FRR + BGP)


📦 Stage 1 — Internal Mesh VM Routing Only

@scyto
scyto / routed-vm-mesh-access.md
Last active May 8, 2025 04:00
how to access proxmox ceph mesh from VMs on the same proxmox nodes

Give VMs Access to Ceph Mesh (routed, not bridged, access)

Version 0.9 (2025.04.29)

Routed access is needed: you can't just bridge en05 and en06 and have VMs work. Bridging seems not to work on Thunderbolt interfaces; at least I could never get the interfaces working when bridged, and it broke the Ceph mesh completely.

tl;dr: you can't bridge Thunderbolt interfaces
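As a sketch of what routed (not bridged) access looks like in practice, give the VMs a bridge with no physical ports and let the host route between that bridge and the Thunderbolt links. The vmbr1 name and 10.10.10.0/24 subnet here are assumptions, not this gist's exact values:

    # /etc/network/interfaces.d/vm-mesh (sketch; names and addresses are placeholders)
    auto vmbr1
    iface vmbr1 inet static
        address 10.10.10.1/24
        # no physical ports: this bridge only connects VMs to the host
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        # the host routes between vmbr1 and en05/en06, so forwarding must be on
        post-up sysctl -w net.ipv4.ip_forward=1
        post-up sysctl -w net.ipv6.conf.all.forwarding=1

VMs attach to vmbr1 and use 10.10.10.1 as their gateway, so en05 and en06 are never bridged.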

Goal

CephFS Mounting for Docker VMs (first draft)

2025.04.27 - currently untested end-to-end. I had ChatGPT write this up based on the process I worked through with it, so E&OE ...

This document describes the clean, final method to mount a CephFS filesystem for Docker VMs across your cluster.

Assumptions:

  • you have a working CephFS volume called docker (out of scope)
  • you can see it mounted just fine on all 3 PVE nodes (if you can't, this is never going to work)
  • you are using the IPv6 version of my Ceph Proxmox setup (an example mount follows this list)
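To illustrate the end state, a kernel-client fstab entry on a Docker VM might look like the sketch below. The monitor addresses, client name, secret path, and mount point are all placeholders; note that older kernels select the filesystem with mds_namespace= rather than fs=:

    # /etc/fstab sketch (all values are placeholders; 3300 is the msgr2 port)
    [fc00::81]:3300,[fc00::82]:3300,[fc00::83]:3300:/ /mnt/docker ceph name=docker,secretfile=/etc/ceph/docker.secret,fs=docker,_netdev 0 0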

📚 Proxmox FRR OpenFabric IPv6 Initial Setup (fc00::/128 Design)


🔢 Overview

This document describes the original setup to establish an FRR (Free Range Routing) OpenFabric IS-IS based IPv6 routed mesh over Thunderbolt networking between Proxmox nodes, using static /128 loopback addresses in the fc00::/8 ULA space.
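A minimal per-node frr.conf sketch of that design; the NET, interface names, and the fc00::81/128 loopback address are placeholders that follow the pattern described, not the gist's literal values:

    # /etc/frr/frr.conf sketch for one node (placeholders throughout)
    interface lo
     ipv6 address fc00::81/128
     ipv6 router openfabric 1
     ! advertise the loopback without forming adjacencies on it
     openfabric passive
    !
    interface en05
     ipv6 router openfabric 1
    !
    interface en06
     ipv6 router openfabric 1
    !
    router openfabric 1
     net 49.0000.0000.0081.00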

This provided:

@scyto
scyto / ceph-ip-migration-on-proxmox.md
Last active April 28, 2025 05:45
migrate ceph network from /128 IP addresses to /64

IF YOU FIND THIS GIST: THIS IS A ROUGH GUIDE TO CHANGING CEPH IPs AND SUBNETS. MY ADVICE IS DON'T; IT GOES BADLY EVERY TIME, AND IN REALITY THESE ROUGH STEPS ARE MISSING SOME STEPS.

🌟 Proxmox Ceph IPv6 Monitor IP Migration Best Practices

I learned an important lesson today: never, ever remove ms_bind_ipv4 = false from ceph.conf, or CephFS will be completely broken. Note also that recreating the MGRs and MDS daemons seems advisable.
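For reference, the lines in question look like this sketch of the [global] section (ms_bind_ipv6 is shown too, since an IPv6-only cluster needs both):

    # /etc/ceph/ceph.conf (sketch) - keep both lines on an IPv6-only cluster
    [global]
    ms_bind_ipv4 = false
    ms_bind_ipv6 = true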

Only ever reboot one node at a time; if that doesn't work, or you see a libceph error storm when it reboots, solve that first (make sure no wrong mons are defined in storage.cfg or ceph.conf).

@scyto
scyto / dual-stack-openfabric-mesh-v2.md
Last active May 11, 2025 16:25
New version of my mesh network using openfabric

Enable Dual Stack (IPv4 and IPv6) OpenFabric Routing

Version 2.5 (2025.04.27)

this gist is part of this series

This assumes you are running Proxmox 8.4 and that the line source /etc/network/interfaces.d/* is at the end of the interfaces file (this is automatically added to both new and upgraded installations of Proxmox 8.2).

This changes the previous file design, thanks to @NRGNet and @tisayama, to make the system much more reliable in general and more maintainable, especially for folks using IPv4 on the private cluster network (I still recommend the IPv6 FC00 network you will see in these docs).
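For illustration, a per-link stanza dropped into /etc/network/interfaces.d/ might look like the sketch below; the file name and the 65520 MTU are assumptions here, so use the values from the gist itself:

    # /etc/network/interfaces.d/thunderbolt (sketch; values are assumptions)
    # no addresses on these links: the node address lives on lo and FRR
    # provides reachability over them
    auto en05
    iface en05 inet manual
        mtu 65520

    auto en06
    iface en06 inet manual
        mtu 65520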

@scyto
scyto / docker-cephfs-virtiofs.md
Last active May 11, 2025 20:17
Hypervisor Host Based CephFS pass through with VirtioFS

Using VirtioFS backed by CephFS for bind mounts

This is currently work-in-progress documentation: rough notes for me, possibly missing a lot or wrong in places.

The idea is to replace GlusterFS running inside the VM with storage on my CephFS cluster. This is my Proxmox cluster; it provides the storage and acts as the hypervisor for my Docker VMs (a guest-side mount sketch follows the list of alternatives below).

Other possible approaches:

  • ceph-fuse client in the VM to mount CephFS, or Ceph RBD over IP
  • use of the Ceph Docker volume plugin (no usable version of this exists yet, but it is being worked on)
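Whatever the host-side wiring looks like, the guest side of the VirtioFS approach is simple. A sketch, assuming the host exports the CephFS directory under a tag named docker-cephfs (the tag and mount point are placeholders):

    # inside the VM: mount the host's VirtioFS export
    mount -t virtiofs docker-cephfs /mnt/docker

    # or persistently via /etc/fstab
    docker-cephfs /mnt/docker virtiofs defaults 0 0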

Assumptions:

@scyto
scyto / nas-debian.md
Last active February 24, 2025 09:27
NAS-homebrew-install

Install Debian

non-graphical; SSH and basic tools only

apt-get install nano sudo nfs-common samba-common

usermod -aG sudo [your-username]

switch to your username; all commands thereafter use sudo when needed

add contrib sources (why?)
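A common reason to want contrib on a homebrew NAS is ZFS, since zfs-dkms lives there. A sketch assuming Debian 12 "bookworm"; adjust the release name to match your install:

    # /etc/apt/sources.list - append "contrib" to each line, for example:
    deb http://deb.debian.org/debian bookworm main contrib
    deb http://deb.debian.org/debian bookworm-updates main contrib
    deb http://security.debian.org/debian-security bookworm-security main contrib

    apt-get update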