
Migrate Proxmox ZFS Boot Pool (rpool) from NVMe to SATA SSDs

This guide documents how to migrate your Proxmox bootable ZFS mirror (rpool) from two NVMe drives to two SATA SSDs of the same size, preserving bootability via UEFI.

Summary

From:

  • ZFS mirror on: /dev/nvme0n1p3, /dev/nvme1n1p3
  • UEFI boot partitions: /dev/nvme0n1p2, /dev/nvme1n1p2

To:

  • ZFS mirror on: /dev/sda3, /dev/sdb3
  • UEFI boot partitions: /dev/sda2, /dev/sdb2

Step 1: Identify Disks and ZFS Devices

List devices:

Find the SATA disks you want to use, e.g. a pair of Samsung 860 EVO SSDs:

fdisk -l

Example output:

...
Disk /dev/sda: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: Samsung SSD 860
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
...
Disk /dev/sdb: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: Samsung SSD 860
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt

Note down the device names: sda and sdb.
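If you want a stable way to refer to these disks later, you can also note their persistent /dev/disk/by-id/ links (the exact link names depend on the model and serial number of your drives):

ls -l /dev/disk/by-id/ | grep 'ata-'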

Identify current ZFS members:

Now identify the ZFS devices on the NVMe drives:

zpool status

Example output:

pool: rpool
 state: ONLINE
config:

	NAME                                                 STATE     READ WRITE CKSUM
	rpool                                                ONLINE       0     0     0
	  mirror-0                                           ONLINE       0     0     0
	    nvme-eui.e8238fa6bf530001001b444a41c9bec1-part3  ONLINE
	    nvme-eui.e8238fa6bf530001001b444a410fd2c1-part3  ONLINE

These nvme-eui.* entries are what interest us; we will need them later on. If you didn't already find your NVMe drives in the output of fdisk -l, you can resolve them using readlink -f:

readlink -f /dev/disk/by-id/nvme-eui.e8238fa6bf530001001b444a41c9bec1-part3
/dev/nvme1n1p3
readlink -f /dev/disk/by-id/nvme-eui.e8238fa6bf530001001b444a410fd2c1-part3
/dev/nvme0n1p3

Step 2: Wipe Target Disks

Now we need to wipe the partition tables of the SATA SSDs:

Warning: This is destructive. Ensure /dev/sda and /dev/sdb are the correct targets.

sgdisk --zap-all /dev/sda
sgdisk --zap-all /dev/sdb
wipefs -a /dev/sda
wipefs -a /dev/sdb
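Afterwards both disks should show up without any partitions or filesystem signatures, which you can confirm with:

lsblk -o NAME,FSTYPE,SIZE /dev/sda /dev/sdb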

Step 3: Clone Partition Table from Existing NVMe Drive

Use one of the existing bootable NVMe drives (e.g. /dev/nvme0n1) as the template:

sgdisk /dev/nvme0n1 -R=/dev/sda
sgdisk /dev/nvme0n1 -R=/dev/sdb

Randomize disk GUIDs to avoid duplication:

sgdisk -G /dev/sda
sgdisk -G /dev/sdb
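To verify that the layouts match, print both partition tables; only the disk identifiers (GUIDs) should differ after the randomization:

sgdisk -p /dev/nvme0n1
sgdisk -p /dev/sda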

Step 4: Replace ZFS Mirror Devices

Replace the first NVMe member:

zpool replace -f rpool nvme-eui.e8238fa6bf530001001b444a41c9bec1-part3 /dev/sda3
watch zpool status

Wait for resilvering to complete before proceeding.
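Instead of polling with watch, OpenZFS 2.0 and newer can block until the resilver has finished:

zpool wait -t resilver rpool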

Replace the second NVMe member:

zpool replace -f rpool nvme-eui.e8238fa6bf530001001b444a410fd2c1-part3 /dev/sdb3
watch zpool status

Wait again until resilvering completes.

Step 5: Format and Initialize UEFI Boot Partitions

Assuming the cloned partition table created /dev/sda2 and /dev/sdb2 as the EFI System Partitions (ESPs):

proxmox-boot-tool format /dev/sda2
proxmox-boot-tool format /dev/sdb2

Then initialize the bootloader:

proxmox-boot-tool init /dev/sda2
proxmox-boot-tool init /dev/sdb2
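If efibootmgr is installed, you can also check that boot entries for the new ESPs were added to the firmware:

efibootmgr -v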

Step 6: Verify Boot Configuration

Check the current status of all boot entries:

proxmox-boot-tool status

Use blkid to verify that the UUIDs listed there match the new ESPs:

blkid /dev/sda2 /dev/sdb2

You should see both /dev/sda2 and /dev/sdb2 reported as vfat, and their UUIDs should appear in the proxmox-boot-tool status output as configured with uefi, together with the available kernel versions.

You can also inspect more details with:

lsblk -o NAME,FSTYPE,UUID,MOUNTPOINT /dev/sda /dev/sdb

Step 7: Test Boot

  1. Reboot the system. proxmox-boot-tool init has already added UEFI boot entries for the new ESPs.
  2. Once booted, confirm the pool now consists of your SATA drives:
zpool status
  pool: rpool
 state: ONLINE
  scan: resilvered 29.0G in 00:01:03 with 0 errors on Sat Jul 12 15:20:37 2025
config:

	NAME        STATE     READ WRITE CKSUM
	rpool       ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    sda3    ONLINE       0     0     0
	    sdb3    ONLINE       0     0     0

Step 8: Clean up

Remove the stale entries from the list of boot ESP UUIDs maintained by proxmox-boot-tool:

nano /etc/kernel/proxmox-boot-uuids

Remove lines referencing the old NVMe ESPs.
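The file contains one ESP filesystem UUID per line. To see which UUIDs belong to the new SATA ESPs (and must therefore stay), query them directly:

blkid -s UUID -o value /dev/sda2 /dev/sdb2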

Then run:

proxmox-boot-tool refresh
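Afterwards, proxmox-boot-tool status should list only the two new ESPs:

proxmox-boot-tool status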

Step 9: Reuse NVMe Drives for ZFS Storage

Once the system is booting from the SATA SSDs, wipe and create a new mirrored pool on the NVMe devices:

sgdisk --zap-all /dev/nvme0n1
sgdisk --zap-all /dev/nvme1n1
wipefs -a /dev/nvme0n1
wipefs -a /dev/nvme1n1

Use the Proxmox Web UI to create a new ZFS mirror and configure it as additional storage.
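If you prefer the command line, a minimal sketch of the same steps could look like this (the pool name nvme-pool is a placeholder, and the by-id paths must be replaced with the IDs of your own NVMe drives):

zpool create -o ashift=12 nvme-pool mirror /dev/disk/by-id/nvme-eui.XXXX /dev/disk/by-id/nvme-eui.YYYY
pvesm add zfspool nvme-pool --pool nvme-pool

The pvesm call registers the new pool as a storage in Proxmox, equivalent to adding it under Datacenter → Storage in the Web UI.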

Result

You should now have a bootable Proxmox system running from a ZFS mirror on the SATA SSDs, with the NVMe drives available as fast mirrored storage.

