@benitogf
Last active July 19, 2025 19:01
ZFS mirror Ubuntu boot drive
#!/bin/sh
# Assumptions and requirements
# - All drives will be formatted. These instructions are not suitable for dual-boot
# - No hardware or software RAID is to be used, these would keep ZFS from detecting disk errors and correcting them. In UEFI settings, set controller mode to AHCI, not RAID
# - These instructions are specific to UEFI systems and GPT. If you have an older BIOS/MBR system, please use https://openzfs.github.io/openzfs-docs/Getting%20Started/Ubuntu/Ubuntu%2020.04%20Root%20on%20ZFS.html
# Change these disk variables to your disk paths (check with lsblk)
DISK1="/dev/nvme0n1"
DISK2="/dev/nvme1n1"
if [ "$(id -u)" -ne 0 ]; then
  echo "Please run this script with sudo:"
  echo "sudo $0 $*"
  exit 1
fi
echo "Installing dependencies"
apt install -y gdisk mdadm grub-efi-amd64
echo "-- Create partitions on second drive"
echo "Change swap partition type on disk 1 to Linux RAID (FD00)"
sgdisk -t2:FD00 "$DISK1"
echo "Copy partition table from disk 1 to disk 2"
sgdisk -R "$DISK2" "$DISK1"
echo "Change GUID of second disk"
sgdisk -G "$DISK2"
# this change seems to take a while to propagate :/
# partprobe (from the parted package) asks the kernel to re-read the partition
# table, as suggested in the comments below
command -v partprobe >/dev/null 2>&1 && partprobe "$DISK2"
sleep 1
echo "-- Mirror boot pool"
echo "Get GUID of partition 3 on both disks"
DISK1_PART3_GUID=$(sgdisk -i3 "$DISK1" | grep "^Partition unique GUID:" | awk '{print tolower($4)}')
DISK2_PART3_GUID=$(sgdisk -i3 "$DISK2" | grep "^Partition unique GUID:" | awk '{print tolower($4)}')
echo "DISK 1 PART 3 GUID"
echo "$DISK1_PART3_GUID"
echo "DISK 2 PART 3 GUID"
echo "$DISK2_PART3_GUID"
if [ -z "$DISK1_PART3_GUID" ]; then
  echo "error: failed to get the disk1 part 3 guid"
  exit 1
fi
if [ -z "$DISK2_PART3_GUID" ]; then
  echo "error: failed to get the disk2 part 3 guid"
  exit 1
fi
echo "attach partition to bpool"
zpool attach bpool "$DISK1_PART3_GUID" "/dev/disk/by-partuuid/$DISK2_PART3_GUID" || exit 1
echo "verify that bpool is now a mirror"
zpool status bpool | grep -q mirror || { echo "error: bpool mirror not found"; exit 1; }
echo "-- Mirror root pool"
DISK1_PART4_GUID=$(sgdisk -i4 "$DISK1" | grep "^Partition unique GUID:" | awk '{print tolower($4)}')
DISK2_PART4_GUID=$(sgdisk -i4 "$DISK2" | grep "^Partition unique GUID:" | awk '{print tolower($4)}')
if [ -z "$DISK1_PART4_GUID" ]; then
  echo "error: failed to get the disk1 part 4 guid"
  exit 1
fi
if [ -z "$DISK2_PART4_GUID" ]; then
  echo "error: failed to get the disk2 part 4 guid"
  exit 1
fi
echo "attach partition to rpool"
zpool attach rpool "$DISK1_PART4_GUID" "/dev/disk/by-partuuid/$DISK2_PART4_GUID" || exit 1
echo "verify that rpool is now a mirror"
zpool status rpool | grep -q mirror || { echo "error: rpool mirror not found"; exit 1; }
echo "-- Mirror Swap"
echo "remove existing swap"
swapoff -a || exit 1
echo "remove the swap mount line in /etc/fstab"
sed -i '/swap/d' /etc/fstab
echo "create software mirror drive for swap"
# note: the p2 suffix matches NVMe device names (nvme0n1p2); plain SATA disks
# use a bare number (sda2), see the comments below
mdadm --create /dev/md0 --metadata=1.2 --level=mirror --raid-devices=2 "${DISK1}p2" "${DISK2}p2" || exit 1
echo "configure mirror drive for swap"
mkswap -f /dev/md0 || exit 1
echo "place mirror swap in fstab"
# the script already runs as root, so no sudo or sh -c wrapper is needed
echo "UUID=$(blkid -s UUID -o value /dev/md0) none swap discard 0 0" >> /etc/fstab
grep -q "^UUID=.* swap " /etc/fstab || { echo "error: swap entry missing from /etc/fstab"; exit 1; }
echo "use the new swap"
swapon -a || exit 1
echo "-- Move grub menu to ZFS"
echo "verify that grub can see the ZFS boot pool"
grub-probe /boot || exit 1
echo "create EFI file system on second disk"
mkdosfs -F 32 -s 1 -n EFI "${DISK2}p1"
echo "remove /boot/grub from fstab"
sed -i '/grub/d' /etc/fstab
echo "umount /boot/grub"
umount /boot/grub
# Verify with df -h: /boot should be mounted from bpool/BOOT/ubuntu_UID,
# /boot/efi from the first disk's ESP (e.g. /dev/nvme0n1p1), and /boot/grub
# should no longer be mounted
echo "remove /boot/grub"
rm -rf /boot/grub
echo "create ZFS dataset for grub"
zfs create -o com.ubuntu.zsys:bootfs=no bpool/grub
echo "refresh initrd files"
update-initramfs -c -k all
echo "disable memory zeroing to address a performance regression of ZFS on Linux"
sed -i.bak "s/GRUB_CMDLINE_LINUX_DEFAULT=\"quiet splash\"/GRUB_CMDLINE_LINUX_DEFAULT=\"quiet splash init_on_alloc=0\"/g" /etc/default/grub
echo "update grub"
update-grub
echo "reload daemon"
systemctl daemon-reload
echo "install grub to the esp"
grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=ubuntu --recheck --no-floppy
echo "disable grub-initrd-fallback.service"
systemctl mask grub-initrd-fallback.service
echo "select both EFI system partitions when prompted so grub is installed to both disks"
dpkg-reconfigure grub-efi-amd64
echo "DONE without errors"
@enoch85
enoch85 commented Aug 4, 2023

Not tested but looks good, thanks!

If you publish it on GitHub as a repo instead, one could contribute * hint hint * :)

@benitogf

benitogf commented Aug 9, 2023

@enoch85 noted, will make it into a repo after I do more testing on it, don't think it's very stable 🍂

@philkunz

What's the status on this? Has it been verified to work well?

@kendallgreen

My Ubuntu 22.04 ZFS installation using the Ubuntu installer has 5 partitions on the disk, not 4: p1 1024K EF02 (BIOS boot), p2 513K EF00 (EFI System Partition), p3 2G 8200 (swap), p4 2G BE00 (boot pool) and p5 949G BF00 (root pool). I can't copy this disk to another of the same size using sgdisk -R disk2 disk1.
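A pre-check along these lines could catch a layout mismatch before copying the partition table (a sketch, not part of the gist; count_parts is a hypothetical helper that counts the numbered rows of sgdisk -p output):

```shell
# Hypothetical helper: count the numbered partition rows in `sgdisk -p <disk>`
# output read from stdin, so a 5-partition source layout can be detected
# before running `sgdisk -R`.
count_parts() {
  awk '$1 ~ /^[0-9]+$/ { n++ } END { print n + 0 }'
}

# usage sketch:
#   sgdisk -p /dev/nvme0n1 | count_parts
```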

@steveatitg

I found on Ubuntu 22.04 that this needs a few changes.

  1. The partition indicator on line 107 (p1) should be -part1 for 22.04
  2. The partition indicators on line 89 (p2 twice) should be -part2 for 22.04
  3. Benefits from partprobe around line 34 to update the OS state of the partition table

@boekhold

I found on Ubuntu 22.04 that this needs a few changes.

  1. The partition indicator on line 107 (p1) should be -part1 for 22.04
  2. The partition indicators on line 89 (p2 twice) should be -part2 for 22.04
  3. Benefits from partprobe around line 34 to update the OS state of the partition table

The partition indicator really depends on the type of disk. If you have NVMe drives, it's p1, p2 etc. If you have SATA drives, it's just 1, 2 etc. This could possibly be handled by a "PARTITION_INDICATOR_PREFIX" (PIP?) env variable at the top.
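That suffix rule could be sketched as a small helper (hypothetical, assuming the usual Linux kernel naming where nvme/mmcblk device names insert a p before the partition number; by-id paths use -part2 instead, as noted above):

```shell
# Hypothetical helper: return the partition-name infix for a given disk path.
# NVMe and eMMC kernel names insert a "p" (nvme0n1p2, mmcblk0p2);
# SATA/SCSI/virtio names do not (sda2, vda2).
part_prefix() {
  case "$1" in
    *nvme*|*mmcblk*) printf 'p' ;;
    *) printf '' ;;
  esac
}

# usage sketch:
#   mdadm --create /dev/md0 ... "${DISK1}$(part_prefix "$DISK1")2" "${DISK2}$(part_prefix "$DISK2")2"
```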

@boekhold

Ubuntu 25.04 MATE edition:

Partition 1: BIOS Boot
Partition 2: boot pool
Partition 3: swap
Partition 4: root pool

Lots of different datasets configured here out-of-the-box:

$ zfs list -r bpool rpool
NAME                                               USED  AVAIL  REFER  MOUNTPOINT
bpool                                              108M  1.64G    96K  /boot
bpool/BOOT                                         108M  1.64G    96K  none
bpool/BOOT/ubuntu_5ipala                           108M  1.64G   108M  /boot
rpool                                             4.81G  13.6G    96K  /
rpool/ROOT                                        4.80G  13.6G    96K  none
rpool/ROOT/ubuntu_5ipala                          4.80G  13.6G  3.76G  /
rpool/ROOT/ubuntu_5ipala/srv                        96K  13.6G    96K  /srv
rpool/ROOT/ubuntu_5ipala/usr                       224K  13.6G    96K  /usr
rpool/ROOT/ubuntu_5ipala/usr/local                 128K  13.6G   128K  /usr/local
rpool/ROOT/ubuntu_5ipala/var                      1.04G  13.6G    96K  /var
rpool/ROOT/ubuntu_5ipala/var/games                  96K  13.6G    96K  /var/games
rpool/ROOT/ubuntu_5ipala/var/lib                  1.03G  13.6G   943M  /var/lib
rpool/ROOT/ubuntu_5ipala/var/lib/AccountsService   100K  13.6G   100K  /var/lib/AccountsService
rpool/ROOT/ubuntu_5ipala/var/lib/NetworkManager    136K  13.6G   136K  /var/lib/NetworkManager
rpool/ROOT/ubuntu_5ipala/var/lib/apt              71.8M  13.6G  71.8M  /var/lib/apt
rpool/ROOT/ubuntu_5ipala/var/lib/dpkg             44.9M  13.6G  44.9M  /var/lib/dpkg
rpool/ROOT/ubuntu_5ipala/var/log                  1.85M  13.6G  1.85M  /var/log
rpool/ROOT/ubuntu_5ipala/var/mail                   96K  13.6G    96K  /var/mail
rpool/ROOT/ubuntu_5ipala/var/snap                 1.45M  13.6G  1.45M  /var/snap
rpool/ROOT/ubuntu_5ipala/var/spool                 112K  13.6G   112K  /var/spool
rpool/ROOT/ubuntu_5ipala/var/www                    96K  13.6G    96K  /var/www
rpool/USERDATA                                    1.98M  13.6G    96K  none
rpool/USERDATA/home_qe47zj                        1.68M  13.6G  1.68M  /home
rpool/USERDATA/root_qe47zj                         212K  13.6G   212K  /root

@boekhold

boekhold commented Jun 14, 2025

Ubuntu 25.04 MATE edition:

Partition 1: BIOS Boot
Partition 2: boot pool
Partition 3: swap
Partition 4: root pool

I'm absolutely baffled. The above 4 partitions were created when I installed Ubuntu MATE 25.04 with ZFS on a 4GB RAM KVM VM. Today I re-installed this on an 8GB RAM KVM VM, and I only have three partitions:

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048            4095   1024.0 KiB  EF02  
   2            4096         3719167   1.8 GiB     8300  
   3         3719168        41940991   18.2 GiB    8300  

where partition 2 is the boot pool and partition 3 the root pool. And there is no swap enabled whatsoever.

@bertelschmitt

I have tested this script on an Ubuntu 24.04 zfs boot drive, created by the Ubuntu installer.
Source drive is /dev/nvme2n1, target drive is /dev/nvme3n1
The script sets up the partitions on the target drive, but then errors out with a

"cannot attach /dev/disk/by-partuuid/cc88bd42-d537-496f-a4af-dbe62b7211ce to ef57d783-a975-42bd-a5cf-25b0dab6a4fd: no such device in pool."

The uuids on both drives are as follows:

Source /dev/nvme2n1p3: UUID="c807808b-4951-41a3-b977-8f44e4454a75" TYPE="swap" PARTUUID="ef57d783-a975-42bd-a5cf-25b0dab6a4fd"
Target /dev/nvme3n1p3: PARTUUID="cc88bd42-d537-496f-a4af-dbe62b7211ce"

The pools on the source drive are by-id, not by-uuid.

The script creates a partition 3 of unknown type on the target drive.
The corresponding partition 3 on the source drive is a swap partition.

I could finish the job by manually attaching the respective partition of the target drive to the corresponding partition of the source drive, resulting in:

  pool: bpool
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: resilvered 148M in 00:00:00 with 0 errors on Sun Jul 20 03:07:34 2025
config:

        NAME                                                     STATE     READ WRITE CKSUM
        bpool                                                    ONLINE       0     0     0
          mirror-0                                               ONLINE       0     0     0
            nvme-KIOXIA-EXCERIA_G2_SSD_62DFC06XFM95-part2        ONLINE       0     0     0
            nvme-SOLIDIGM_SSDPFKKW020X7_SSC2N41421160255E-part2  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: resilvered 107G in 00:02:00 with 0 errors on Sun Jul 20 03:14:59 2025
config:

        NAME                                                 STATE     READ WRITE CKSUM
        rpool                                                ONLINE       0     0     0
          mirror-0                                           ONLINE       0     0     0
            nvme-eui.00000000000000008ce38e0500bc28fa-part4  ONLINE       0     0     0
            nvme-eui.aca32f03150082e52ee4ac0000000001-part4  ONLINE       0     0     0

errors: No known data errors
