migrate ceph network from /128 IP addresses to /64

🌟 Proxmox Ceph IPv6 Monitor IP Migration Best Practices

I learned an important lesson today: never, ever remove ms_bind_ipv4 = false from ceph.conf, or CephFS will break badly. Note also that recreating the MGRs and MDS daemons seems advisable too.

Only ever reboot one node at a time. If that doesn't work, or you see a libceph error storm when it reboots, solve that first (make sure no wrong MONs are defined in storage.cfg or ceph.conf).
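
A quick way to check for stale MON references before rebooting anything (a minimal sketch, assuming the old /128 MON addresses follow the fc00::81-fc00::83 scheme used below):

# list any MON addresses still referenced in the cluster-wide config files
grep -n "mon" /etc/pve/ceph.conf /etc/pve/storage.cfg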


πŸ“š Purpose

This document describes a safe, production-grade method for migrating a Proxmox+Ceph cluster to new /64 IPv6 loopback addresses for monitor daemons (MONs) without downtime, following best practices and using native Proxmox tools (pveceph, GUI).


πŸ”Ž Overview

  • Old configuration: MONs bound to /128 loopback IPv6 addresses (e.g., fc00::81/128)
  • New configuration: MONs bound to /64 routed loopback IPv6 addresses (e.g., fc00:81::1/64)
  • Goal: Gracefully migrate monitors to new IPs with no client disruption
  • Cluster type: Proxmox nodes are the only Ceph clients (no external clients)

πŸ”’ Migration Plan


βœ… Phase 1: Prepare Networking (Linux)

On each node (pve1, pve2, pve3):

Edit /etc/network/interfaces.d/thunderbolt:

# Loopback for Ceph MON
auto lo
iface lo inet loopback
    up ip -6 addr add fc00::81/128 dev lo
    up ip -6 addr add fc00:81::1/64 dev lo

Apply live without reboot:

ip -6 addr add fc00:81::1/64 dev lo

Verify:

ip -6 addr show lo

βœ… Confirm full FRR/OpenFabric routing.


βœ… Phase 2: Update Ceph Networks

Immediately after networking is expanded, update /etc/pve/ceph.conf:

[global]
public_network = fc00::/8
cluster_network = fc00::/8

βœ… No Ceph restart needed yet.


βœ… Phase 3: Migrate MONs One-by-One

Per Node:

(a) Create New MON on New IP

pveceph mon create --mon-address fc00:81::1

βœ… Wait for MON to become healthy.

(b) Delete Old MON

Using GUI or CLI:

pveceph mon destroy pve1

βœ… Clean removal from MONmap.

(c) (Optional) Reload ceph-mgr

systemctl reload [email protected]

βœ… Forces GUI refresh if needed.


βœ… Phase 4: Repeat for All Nodes

  • pve2:
    pveceph mon create --mon-address fc00:82::1
    pveceph mon destroy pve2
  • pve3:
    pveceph mon create --mon-address fc00:83::1
    pveceph mon destroy pve3
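
Once all three nodes are done, verify that every MON listens on its new /64 address and the cluster is healthy (assuming the fc00:8x::1 scheme above):

ceph mon dump
ceph -s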

βœ… Phase 5: (Optional) Clean Up Old /128 Addresses

After full migration:

  • Edit /etc/network/interfaces.d/thunderbolt
  • Remove old /128 address lines.
  • Keep only /64 addresses.

Apply:

ifreload -a
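
Then confirm only the /64 addresses remain on the loopback:

ip -6 addr show dev lo | grep fc00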

πŸ“‹ Why This Process Works

Feature                       Benefit
Dual IPs during migration     No disruption to MON binding
pveceph mon create            Clean MON creation, automatic key/cert
pveceph mon destroy           Clean MON removal from MONmap
No manual monmaptool needed   Safer, faster
No client downtime            Proxmox internal clients auto-adapt

πŸ“£ Important Clarifications

  • Brackets [ ] are not needed around IPv6 addresses in mon_host in the Proxmox ceph.conf (see the example after this list)
  • Only Proxmox nodes are Ceph clients β†’ no need to update external clients
  • Quorum preserved throughout the migration
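
For illustration, a mon_host line in this style looks like the following (a sketch, assuming the fc00:8x::1 addresses used above):

[global]
mon_host = fc00:81::1 fc00:82::1 fc00:83::1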

πŸ“… Example Timeline

Time    Action
T+0m    Add /64 loopbacks
T+5m    Update public_network, cluster_network
T+10m   Create new MONs
T+20m   Remove old MONs
T+30m   Confirm HEALTH_OK
T+35m   Clean old /128 loopbacks (optional)
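
To confirm the final HEALTH_OK state:

ceph -s
ceph health detail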

πŸš€ Result

  • Pure /64 routed IPv6 Proxmox+Ceph cluster
  • Fully healthy MON set
  • Zero downtime migration
  • Future IPv6 client-ready
  • Production-grade best practice

scyto commented Apr 27, 2025

Note this isn't 100% complete - one also had to destroy and recreate the MGRs and the MDS daemons.
libceph (the kernel client) can get very confused during this process - this will block CephFS volumes from being mounted until you troubleshoot why it is still picking the wrong MONs.
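
For reference, recreating the MGR and MDS on a node and checking the kernel client looks roughly like this (a sketch; daemon IDs are assumed to match the node name, and pveceph subcommand syntax may vary slightly between PVE versions):

# recreate the manager and metadata server so they pick up the new MON addresses
pveceph mgr destroy pve1
pveceph mgr create
pveceph mds destroy pve1
pveceph mds create

# see which MON addresses the kernel client (libceph) is actually trying
dmesg | grep libceph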
