This guide documents how to migrate your Proxmox bootable ZFS mirror (rpool) from two NVMe drives to two SATA SSDs of the same size, preserving bootability via UEFI.
From:
- ZFS mirror on: /dev/nvme0n1p3, /dev/nvme1n1p3
- UEFI boot partitions: /dev/nvme0n1p2, /dev/nvme1n1p2
To:
- ZFS mirror on: /dev/sda3, /dev/sdb3
- UEFI boot partitions: /dev/sda2, /dev/sdb2
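Optionally, confirm the current layout on the NVMe side before starting; a quick check with standard lsblk columns:
# Show partitions, sizes and partition types on the existing NVMe drives
lsblk -o NAME,SIZE,FSTYPE,PARTTYPENAME /dev/nvme0n1 /dev/nvme1n1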
Find the SATA disks you want to use, e.g. a pair of Samsung 860 EVO SSDs:
fdisk -l
Example output:
...
Disk /dev/sda: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: Samsung SSD 860
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
...
Disk /dev/sdb: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: Samsung SSD 860
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Note down the device names: sda and sdb.
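Since sdX names can change between reboots, it is safer to also note the stable /dev/disk/by-id names for these disks; for example (the exact model string will differ on your hardware):
# List persistent by-id names for the SATA SSDs
ls -l /dev/disk/by-id/ | grep -i 'ata-Samsung'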
Now identify the ZFS devices on the NVMe drives:
zpool status
Example output:
pool: rpool
state: ONLINE
config:
NAME                                                  STATE     READ WRITE CKSUM
rpool                                                 ONLINE       0     0     0
  mirror-0                                            ONLINE       0     0     0
    nvme-eui.e8238fa6bf530001001b444a41c9bec1-part3   ONLINE       0     0     0
    nvme-eui.e8238fa6bf530001001b444a410fd2c1-part3   ONLINE       0     0     0
These nvme-eui.* entries are what interest us; we will need them later on.
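Optionally, stash them in shell variables to reduce copy-paste errors (the variable names are arbitrary; the zpool replace commands below use the literal names):
# Hypothetical helper variables for the old pool members
OLD1=nvme-eui.e8238fa6bf530001001b444a41c9bec1-part3
OLD2=nvme-eui.e8238fa6bf530001001b444a410fd2c1-part3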
If you didn't already find your NVMe drives in the output of fdisk -l, you can resolve them using readlink -f:
readlink -f /dev/disk/by-id/nvme-eui.e8238fa6bf530001001b444a41c9bec1-part3
/dev/nvme1n1p3
readlink -f /dev/disk/by-id/nvme-eui.e8238fa6bf530001001b444a410fd2c1-part3
/dev/nvme0n1p3
Now we need to wipe the partition tables of the SATA SSDs:
Warning: This is destructive. Ensure /dev/sda and /dev/sdb are the correct targets.
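One way to double-check is by model and serial number before wiping; a minimal check using standard lsblk columns:
# Verify these really are the new SATA SSDs, not pool members
lsblk -o NAME,SIZE,MODEL,SERIAL /dev/sda /dev/sdb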
sgdisk --zap-all /dev/sda
sgdisk --zap-all /dev/sdb
wipefs -a /dev/sda
wipefs -a /dev/sdb
Use one of the existing bootable NVMe drives (e.g. /dev/nvme0n1) as the template:
sgdisk /dev/nvme0n1 -R=/dev/sda
sgdisk /dev/nvme0n1 -R=/dev/sdb
Randomize disk GUIDs to avoid duplication:
sgdisk -G /dev/sda
sgdisk -G /dev/sdb
Now replace the first NVMe ZFS device with its SATA counterpart:
zpool replace -f rpool nvme-eui.e8238fa6bf530001001b444a41c9bec1-part3 /dev/sda3
watch zpool status
Wait for resilvering to complete before proceeding.
zpool replace -f rpool nvme-eui.e8238fa6bf530001001b444a410fd2c1-part3 /dev/sdb3
watch zpool status
Wait again until resilvering completes.
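Before touching the boot partitions, it is worth verifying that the mirror now sits on the SATA partitions and that the cloned partition layout looks sane; a minimal check:
# The mirror should now list sda3 and sdb3
zpool status rpool
# The cloned table should show three partitions per disk
lsblk /dev/sda /dev/sdb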
Assuming the cloned partition table created /dev/sda2 and /dev/sdb2 as the ESPs (EFI System Partitions):
proxmox-boot-tool format /dev/sda2
proxmox-boot-tool format /dev/sdb2
Then initialize the bootloader:
proxmox-boot-tool init /dev/sda2
proxmox-boot-tool init /dev/sdb2
Check the current status of all boot entries:
proxmox-boot-tool status
Use blkid to verify that the entries are correct:
blkid /dev/sda2 /dev/sdb2
Both /dev/sda2 and /dev/sdb2 should show up as vfat filesystems, and proxmox-boot-tool status should list them as configured for uefi boot.
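If efibootmgr is installed, the firmware boot entries can be inspected directly (UEFI systems only):
# Verbose listing of the firmware's boot entries
efibootmgr -v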
You can also inspect more details with:
lsblk -o NAME,FSTYPE,UUID,MOUNTPOINT /dev/sda /dev/sdb
- Reboot the system; the proxmox-boot-tool init runs have already registered the new ESPs as boot entries.
- Once booted, confirm the pool consists of your SATA drives:
zpool status
pool: rpool
state: ONLINE
scan: resilvered 29.0G in 00:01:03 with 0 errors on Sat Jul 12 15:20:37 2025
config:
NAME          STATE     READ WRITE CKSUM
rpool         ONLINE       0     0     0
  mirror-0    ONLINE       0     0     0
    sda3      ONLINE       0     0     0
    sdb3      ONLINE       0     0     0
Remove stale entries from Proxmox kernel boot UUIDs:
nano /etc/kernel/proxmox-boot-uuids
Remove lines referencing the old NVMe ESPs.
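To tell which lines are stale, compare the file against the UUIDs of the new SATA ESPs; alternatively, proxmox-boot-tool clean prunes entries for ESPs that no longer exist:
# UUIDs Proxmox currently knows about
cat /etc/kernel/proxmox-boot-uuids
# UUIDs of the new SATA ESPs; anything not in this list is stale
blkid -s UUID -o value /dev/sda2 /dev/sdb2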
Then run:
proxmox-boot-tool refresh
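Before zapping anything, it does not hurt to confirm that rpool no longer references the NVMe devices; a quick check:
# Should print nothing but the fallback message if the replace succeeded
zpool status rpool | grep -i nvme || echo "no NVMe devices left in rpool"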
Once the system is booting from the SATA SSDs, wipe the NVMe drives so they can be reused for a new mirrored pool:
sgdisk --zap-all /dev/nvme0n1
sgdisk --zap-all /dev/nvme1n1
wipefs -a /dev/nvme0n1
wipefs -a /dev/nvme1n1
Use the Proxmox Web UI to create a new ZFS mirror and configure it as additional storage.
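If you prefer the CLI, a minimal sketch (the pool name nvpool and ashift=12 are assumptions; adjust to your hardware and naming):
# Create a mirrored pool on the freshly wiped NVMe drives (nvpool is an arbitrary name)
zpool create -o ashift=12 nvpool mirror /dev/nvme0n1 /dev/nvme1n1
# Register it as Proxmox storage
pvesm add zfspool nvpool -pool nvpool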
You should now have a bootable Proxmox system running from a ZFS mirror on SATA SSDs, and the NVMe drives available as fast mirrored storage.
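As a final sanity check, you can scrub the root pool to verify all data on the new SATA mirror:
# Walk all data and repair any checksum errors from redundancy
zpool scrub rpool
zpool status rpool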