#!/bin/bash
# opensuse-install-zroot: install openSUSE (Leap) on mirrored drives with ZFS root
# Copyright (C) 2025 Aleksa Sarai <[email protected]>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
# NOTE: A fair amount of this script was based on previous writing on the
# topic, including the OpenZFS docs
#
# <https://openzfs.github.io/openzfs-docs/Getting%20Started/openSUSE/openSUSE%20Leap%20Root%20on%20ZFS.html>
#
# and some openSUSE users
#
# <https://fy.blackhats.net.au/blog/2024-09-07-opensuse-on-zfs/>
#
# However, the proposed setups either did not completely configure raid1 for
# all partitions (including the /boot/EFI partition) or were missing raid1
# swap. The goal of this script is to produce a setup where either entire
# drive can easily be replaced. Some of the notes on raid1 /boot/efi were
# loosely based on
#
# <https://std.rocks/gnulinux_mdadm_uefi.html>
#
# The usage is ./opensuse-install-zroot.sh <disk1-id> <disk2-id>, where the
# given IDs are the associated names in /dev/disk/by-id/ for the whole disk
# (not a partition).
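#
# For reference, replacing a failed drive in the resulting setup should look
# roughly like the following (hypothetical device names, NOT run by this
# script; the arrays and pool are set up further below):
#
#   # Copy the partition table from the surviving disk, then randomise GUIDs.
#   sgdisk -R=/dev/disk/by-id/ata-NewDisk /dev/disk/by-id/ata-SurvivingDisk
#   sgdisk -G /dev/disk/by-id/ata-NewDisk
#   # Re-add the new partitions to the mdraid mirrors.
#   mdadm --manage /dev/md/boot-efi --add /dev/disk/by-id/ata-NewDisk-part1
#   mdadm --manage /dev/md/cryptswap --add /dev/disk/by-id/ata-NewDisk-part2
#   mdadm --manage /dev/md/boot --add /dev/disk/by-id/ata-NewDisk-part3
#   # Re-create the LUKS container on the new zroot partition (luksFormat +
#   # systemd-cryptenroll, as done below), then resilver onto it.
#   zpool replace zroot <old-vdev> /dev/mapper/zroot-cryptN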
set -Eueo pipefail
shopt -s nullglob
OPENSUSE_LEAP_VERSION=15.6
bail() {
echo "ERROR:" "$@" >&2
exit 1
}
usage() {
[ "$#" -eq 0 ] || echo "ERROR:" "$@" >&2
cat >/dev/stderr <<-EOH
usage: sudo $0 [--boot <boot-type=xfs-grub>] <disk1-id> <disk2-id>
This tool will install openSUSE Leap $OPENSUSE_LEAP_VERSION to the provided
disks, WIPING ANY OLD DATA HELD ON THEM. I strongly recommend only ever
running this script on a blank openSUSE Leap installation (or LiveCD) with
any other important disks disconnected from the system.
--boot <boot-type> Specify how /boot should be handled. <boot-type>
must be one of the following values:
zboot-grub - use a separate ZFS pool for /boot,
with GRUB as the bootloader
xfs-grub - use a separate XFS md-raid for /boot,
with GRUB as the bootloader
At time of writing, zboot-grub and xfs-grub do not
appear to work with GRUB due to known issues with
grub2-probe and grub2-install.
The disks are referenced by their names in /dev/disk/by-id/ (not their
kernel names such as /dev/sda). I would strongly recommend using the ATA
names to reduce confusion as much as possible (e.g. ata-FooBar-ABC).
Note that openSUSE Leap 15.x and openSUSE Tumbleweed appear to have
different package database formats, which will cause errors like
warning: Found NDB Packages.db database while attempting bdb backend: using ndb backend.
if you try to use an openSUSE Tumbleweed host to install openSUSE Leap. You
should use an openSUSE Leap host (whether it's a LiveCD or proper install)
to use this script.
EOH
exit_code=0
[ "$#" -gt 0 ] && exit_code=1
exit "$exit_code"
}
GETOPT="$(getopt -o h --long help,boot: -- "$@")"
eval set -- "$GETOPT"
boot_type=xfs-grub
while true; do
case "$1" in
--boot) boot_type="$2"; shift 2 ;;
--) shift; break ;;
-h | --help) usage ;;
esac
done
case "$boot_type" in
zboot-grub | xfs-grub) ;;
*) usage "unknown --boot option $boot_type" ;;
esac
[ "$(id -u)" == 0 ] || usage "this tool must be run as root"
[ "$#" == 2 ] || usage "must pass exactly two disk arguments"
# TODO: Support != 2 disk setups. For 1 disk we need to remove the "mirror"
# vdev stuff (and see if mdraid will let us set up a 1-disk raid1). For >2 disk
# setups we just need to add a few more loops and things.
DISK1="/dev/disk/by-id/$1"
DISK2="/dev/disk/by-id/$2"
cat >/dev/stderr <<EOF
WARNING: YOU ARE ABOUT TO REMOVE ALL DATA FROM THESE TWO DISKS:
* $DISK1
* $DISK2
This action is IRREVERSIBLE. PLEASE CONFIRM THESE ARE THE DISKS YOU WANT TO
USE and that they CONTAIN NO IMPORTANT DATA.
Press ENTER to continue, otherwise press Ctrl-C to exit.
EOF
read -r
# Explicitly say what we're doing.
set -x
# Needed stuff.
zypper install -y tpm2.0-tools zfs zfs-ueficert
# Make sure ZFS is set up on this system.
modprobe zfs
ROOT="$(mktemp -d /zfsroot.XXXXXX)"
########################################
# CONFIGURE PARTITIONS AND FILESYSTEMS #
########################################
# Make sure any old live mdraid drives are gone.
mdadm --stop /dev/md* ||:
# Export any existing pools named zroot or zboot.
zpool export zroot ||:
zpool export zboot ||:
# Close any old LUKS containers called zroot-crypt*.
for old_luks_name in /dev/mapper/zroot-crypt*
do
old_luks_name="$(basename "$old_luks_name")"
cryptsetup close "$old_luks_name" ||:
done
for disk in "$DISK1" "$DISK2"
do
# Wipe any possible mdraid superblocks (not sure if wipefs can find them).
mdadm --zero-superblock "$disk"-part* ||:
# Wipe any outstanding filesystem headers to stop auto-loading and warnings
# when formatting.
wipefs -a -f "$disk"{,-part*}
# Nuke the partition table.
sgdisk --zap-all "$disk"
# Reload partitions.
partprobe "$disk"
done
efi_part=1
swap_part=2
boot_part=3
zroot_part=4
# TODO: Should we use LVM for swap/boot, so we can resize them later? The
# downside is that LVM raid is more opaque than mdraid (even though according
# to lvmraid(7), it is backed by mdraid) and I don't really like mixing
# encrypted swap and non-encrypted /boot on the same PV.
# Configure the disk partitions.
for disk in "$DISK1" "$DISK2"
do
# /boot/efi (mdraid)
sgdisk -n "$efi_part:0:+2G" -c "$efi_part:/boot/efi" -t "$efi_part:ef00" "$disk"
# cryptswap (mdraid)
sgdisk -n "$swap_part:0:+32G" -c "$swap_part:cryptswap" -t "$swap_part:8200" "$disk"
case "$boot_type" in
xfs-grub)
# /boot drive (mdraid + xfs)
sgdisk -n "$boot_part:0:+16G" -c "$boot_part:/boot" -t "$boot_part:ea00" "$disk"
;;
zboot-grub)
# /boot drive (ZFS)
sgdisk -n "$boot_part:0:+16G" -c "$boot_part:/boot" -t "$boot_part:be00" "$disk"
;;
*)
bail "unhandled $boot_type"
;;
esac
# root zpool (*LUKS* encrypted, with TPM-enrolled key)
sgdisk -n "$zroot_part:0:0" -c "$zroot_part:zroot" -t "$zroot_part:8304" "$disk"
# Reload partitions.
partprobe "$disk"
done
# My laptop sometimes needs this sleep to load the partitions (even though we
# use partprobe). Go figure...
until [ -e "$DISK1-part1" ] && [ -e "$DISK2-part1" ]
do
sleep 0.2s
done
# Configure mdraid for efi. UEFI can't handle mdraid, but luckily mdraid 0.90
# metadata is stored at the end of the filesystem and so "dumb" filesystem
# implementations (like those in edk2) will happily treat the EFI partition as
# valid.
mdadm --create \
--name="boot-efi" \
--metadata=0.90 --level=mirror --raid-devices=2 \
/dev/md/boot-efi {"$DISK1","$DISK2"}-part"$efi_part"
wipefs -a /dev/md/boot-efi
mkfs.vfat -F 32 -s 1 -n EFI /dev/md/boot-efi
EFI_UUID="$(blkid -s UUID -o value /dev/md/boot-efi)"
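# Because the 0.90 superblock lives at the *end* of the partition, each raw
# member still begins with a valid FAT filesystem, which is what the firmware
# sees. As an optional sanity check, you could mount one member read-only
# (example mountpoint, not run by this script):
#   mount -o ro "$DISK1-part$efi_part" /mnt && umount /mnt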
# Configure mdraid and "label" filesystem for cryptswap. As we are on a server,
# we don't care about hibernation and so we can just use /dev/urandom for the
# private key and create a new LUKS container for each boot. However, this
# means that the UUID of the LUKS container will change each time. The trick
# (borrowed from https://wiki.archlinux.org/title/Dm-crypt/Swap_encryption) is
# to create a small dummy filesystem that provides a stable LABEL/UUID and then
# instruct crypttab to create the LUKS container at an offset from that
# filesystem.
mdadm --create \
--name="cryptswap" \
--metadata=0.90 --level=mirror --raid-devices=2 \
/dev/md/cryptswap {"$DISK1","$DISK2"}-part"$swap_part"
# Make the dummy filesystem only 1M in size and read-only.
wipefs -a /dev/md/cryptswap
mkfs.ext2 -L cryptswap /dev/md/cryptswap 1M
tune2fs -O read-only /dev/md/cryptswap
SWAP_DUMMY_UUID="$(blkid -s UUID -o value /dev/md/cryptswap)"
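# Layout recap: the first 1MiB of /dev/md/cryptswap (2048 x 512-byte sectors)
# is the read-only dummy ext2 filesystem that provides the stable UUID; the
# per-boot swap container starts immediately after it, which is where the
# offset=2048 in the crypttab entry written below comes from.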
# TODO: The documentation says to use a ZFS pool for /boot, but unfortunately
# grub2 doesn't actually support ZFS pools installed in a partition (see
# <https://github.com/zfsonlinux/grub/issues/29> for more details). So we will
# just use a standard mdraid/xfs setup.
#
# grub supports LUKS-encrypted /boot but it does not support TPM
# auto-decryption (at least of the type set up by systemd-cryptenroll) at boot
# time. As we want a boot process that does not require manual intervention (we
# are running on a server) we just have to accept /boot being unencrypted.
# Maybe it's enough to trust that SecureBoot requires some kind of signing...
case "$boot_type" in
zboot-grub)
zpool create \
-o cachefile=/etc/zfs/zpool.cache \
-o ashift=12 \
-o compatibility=grub2 \
-O acltype=posixacl \
-O canmount=off -O mountpoint=/boot \
-O compression=lz4 \
-O normalization=formD \
-O devices=off \
-O relatime=on \
-O xattr=sa \
-R "$ROOT" \
zboot mirror {"$DISK1","$DISK2"}-part"$boot_part"
;;
xfs-grub)
mdadm --create \
--name="boot" \
--metadata=0.90 --level=mirror --raid-devices=2 \
/dev/md/boot {"$DISK1","$DISK2"}-part"$boot_part"
wipefs -a /dev/md/boot
mkfs.xfs -L boot /dev/md/boot
BOOT_UUID="$(blkid -s UUID -o value /dev/md/boot)"
;;
*)
bail "unhandled $boot_type"
;;
esac
# For the root we use LUKS to encrypt the underlying drives. The main issue is
# that encrypted ZFS doesn't allow us to use something
# automated-but-somewhat-secure like TPMs to store the key. We need a fully
# automated boot, which can be accomplished with systemd-cryptenroll (and the
# tpm2-tss dracut module).
declare -A zroot_luks_uuids # dmcryptname => UUID (e.g. zpool-crypt1 => abcdef-....)
for disk in "$DISK1" "$DISK2"
do
# Create a dummy password we just use for formatting, which will be
# replaced with TPM-enrolled and recovery keys managed by
# systemd-cryptenroll.
dummy_key="$(mktemp --tmpdir tmp-crypt-key.XXXXXXXX)"
# NOTE: head with a pipe doesn't work the way you'd like with pipefail.
head -c32 <(tr -dc a-zA-Z0-9 </dev/urandom) >"$dummy_key"
luks_disk="$disk-part$zroot_part"
wipefs -a "$luks_disk"
cryptsetup luksFormat --type luks2 -c aes-xts-plain64 -s 512 -h sha256 -d "$dummy_key" "$luks_disk"
# TODO: Should we bind the key to PCRs? I'm not sure how often the standard
# 7+11+14 PCR combo will break...
systemd-cryptenroll --unlock-key-file="$dummy_key" --tpm2-device=auto "$luks_disk"
# TODO: Use --unlock-tpm2-device=auto, which is in systemd v256+ (Leap 15.6 has v254).
#systemd-cryptenroll --unlock-tpm2-device=auto --wipe-slot=password --recovery-key "$luks_disk"
systemd-cryptenroll --unlock-key-file="$dummy_key" --wipe-slot=password --recovery-key "$luks_disk"
# We no longer need the dummy key.
shred -v "$dummy_key"
rm -f "$dummy_key"
# Annoyingly ${#zroot_luks_uuids[@]} doesn't work the way you'd expect for
# a newly-declared associative array. So we need to do some bullshit...
luks_idx="${zroot_luks_uuids[*]+${#zroot_luks_uuids[@]}}"
luks_idx="${luks_idx:-0}"
# Unlock the new LUKS container.
luks_name="zroot-crypt$((luks_idx+1))"
# systemd-cryptsetup was a systemd internal until v256 it seems.
/usr/lib/systemd/systemd-cryptsetup attach "$luks_name" "$luks_disk"
# We now have the luks name and uuid -- save them.
zroot_luks_uuids["$luks_name"]="$(blkid -s UUID -o value "$luks_disk")"
done
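# At this point every zroot partition has a TPM2-enrolled keyslot plus a
# recovery key. To double-check the enrolments (purely informational, not
# needed for the install), you could run, for example:
#   cryptsetup luksDump "$DISK1-part$zroot_part"   # keyslots and tokens
#   systemd-cryptenroll "$DISK1-part$zroot_part"   # lists enrolled tokens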
# Generate the zpool dev names used when specifying the LUKS containers. We
# could just use the nice /dev/mapper/* name (after all, crypttab will use
# UUIDs to identify the right one when decrypting) but it's better to be safe
# than sorry. Note that dmcrypt unfortunately strips the "-" character from the
# UUID, so we need to do that too.
zroot_vdev_spec=("mirror")
for luks_name in "${!zroot_luks_uuids[@]}"
do
luks_uuid="${zroot_luks_uuids[$luks_name]}"
zroot_vdev_spec+=("/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-$(echo "$luks_uuid" | tr -d -)-$luks_name")
done
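# With a made-up UUID, the resulting vdev paths look something like:
#   /dev/disk/by-id/dm-uuid-CRYPT-LUKS2-0123456789abcdef0123456789abcdef-zroot-crypt1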
# Set up the zpool for the root.
#
# Note that we use compatibility=grub2 here as well. I'm not sure why the wiki
# and other sources say to do this, but apparently grub doesn't like the root
# pool having extra features even if grub doesn't boot from it? Weird.
zpool create \
-o cachefile=/etc/zfs/zpool.cache \
-o ashift=12 \
-o compatibility=grub2 \
-O acltype=posixacl \
-O canmount=off -O mountpoint=/ \
-O compression=lz4 \
-O normalization=formD \
-O relatime=on \
-O xattr=sa \
-R "$ROOT" \
zroot "${zroot_vdev_spec[@]}"
######################################
# CONFIGURE ZFS DATASETS FOR INSTALL #
######################################
# Base /.
zfs create -o canmount=off -o mountpoint=none zroot/ROOT
zfs create -o canmount=noauto -o mountpoint=/ zroot/ROOT/suse
zfs mount zroot/ROOT/suse
# Mount base /boot.
case "$boot_type" in
zboot-grub)
zfs create -o canmount=off -o mountpoint=none zboot/BOOT
zfs create -o mountpoint=/boot zboot/BOOT/suse
;;
xfs-grub)
mkdir -p "$ROOT/boot"
mount /dev/md/boot "$ROOT/boot"
;;
*)
bail "unhandled $boot_type"
;;
esac
# Mount /boot/efi.
mkdir -p "$ROOT/boot/efi"
mount /dev/md/boot-efi "$ROOT/boot/efi"
# Verify that /boot is okay for grub-install.
grub2-probe "$ROOT"/boot
# /home & /root
zfs create zroot/home
zfs create -o mountpoint=/root zroot/home/root
chmod 0700 "$ROOT"/root
# /var
zfs create -o canmount=off zroot/var
zfs create -o canmount=off zroot/var/lib
zfs create zroot/var/log
zfs create zroot/var/spool
# /var/* that are not to be snapshotted.
zfs create -o com.sun:auto-snapshot=false zroot/var/cache
zfs create -o com.sun:auto-snapshot=false zroot/var/tmp
chmod 1777 "$ROOT"/var/tmp
# /opt
zfs create zroot/opt
# /srv
zfs create zroot/srv
# /usr/local
zfs create -o canmount=off zroot/usr
zfs create zroot/usr/local
# /var/lib/docker
zfs create -o com.sun:auto-snapshot=false zroot/var/lib/docker
# /var/lib/nfs
zfs create -o com.sun:auto-snapshot=false zroot/var/lib/nfs
# "$ROOT"/run
mkdir "$ROOT"/run
mount -t tmpfs tmpfs "$ROOT"/run
mkdir "$ROOT"/run/lock
# /tmp
zfs create -o com.sun:auto-snapshot=false zroot/tmp
chmod 1777 "$ROOT"/tmp
# Copy zpool cache.
mkdir -p "$ROOT"/etc/zfs
cp /etc/zfs/zpool.cache "$ROOT"/etc/zfs/
####################
# INSTALL OPENSUSE #
####################
# Base repos.
zypper --root "$ROOT" addrepo -f "http://download.opensuse.org/distribution/leap/\$releasever/repo/oss" leap-oss
zypper --root "$ROOT" addrepo -f "http://download.opensuse.org/distribution/leap/\$releasever/repo/non-oss" leap-nonfree
zypper --root "$ROOT" addrepo -f "http://download.opensuse.org/update/leap/\$releasever/oss" update-oss
zypper --root "$ROOT" addrepo -f "http://download.opensuse.org/update/leap/\$releasever/non-oss" update-nonfree
# filesystems repo (for zfs)
zypper --root "$ROOT" addrepo -f "obs://filesystems/\$releasever" obs-fs
# Import the repo keys.
zypper --root "$ROOT" --releasever="$OPENSUSE_LEAP_VERSION" --gpg-auto-import-keys refresh
# Install base system and core tools.
zypper --root "$ROOT" --releasever="$OPENSUSE_LEAP_VERSION" install -y -t pattern base enhanced_base
zypper --root "$ROOT" --releasever="$OPENSUSE_LEAP_VERSION" install -y zypper
# YaST...
zypper --root "$ROOT" --releasever="$OPENSUSE_LEAP_VERSION" install -y yast2
zypper --root "$ROOT" --releasever="$OPENSUSE_LEAP_VERSION" install -y -t pattern yast2_basis
# Temporary setup for the purposes of chroot.
echo navi > "$ROOT"/etc/hostname
cat >>"$ROOT"/etc/hosts <<EOF
127.0.0.1 navi
127.0.0.1 navi.dot.cyphar.com
EOF
rm -f "$ROOT"/etc/resolv.conf
cp /etc/resolv.conf "$ROOT"/etc/resolv.conf
# Set up pseudofilesystems for chroot.
mount --make-rprivate --rbind /dev "$ROOT"/dev
mount --make-rprivate --rbind /proc "$ROOT"/proc
mount --make-rprivate --rbind /sys "$ROOT"/sys
mount -t tmpfs tmpfs "$ROOT"/run
mkdir "$ROOT"/run/lock
ln -sf /proc/self/mounts "$ROOT"/etc/mtab
# Set locale.
# TODO: Figure out how to do this in a chroot. systemd doesn't like it.
#chroot "$ROOT" localectl set-locale LANG=en_US.UTF-8
# Install the kernel in a chroot.
chroot "$ROOT" zypper refresh
chroot "$ROOT" zypper install -y -f permissions iputils ca-certificates ca-certificates-mozilla pam shadow dbus-1 libutempter0 suse-module-tools util-linux
chroot "$ROOT" zypper install -y kernel-default kernel-firmware
# Install ZFS.
#
# NOTE: You will need to re-install zfs-ueficert on the target system if it is
# different to the system that you did the install on. Installing the package
# triggers MOK to enroll the signing key at the next boot, but I suspect this
# is done by setting efivars and thus the request won't move with the disks to
# the target system (which won't have the keys enrolled in the firmware).
chroot "$ROOT" zypper install -y zfs zfs-kmp-default zfs-ueficert
# We need some other things for system management.
chroot "$ROOT" zypper install -y NetworkManager openssh-server openssh-clients
# Set up LUKS auto-decryption for initramfs/boot.
chroot "$ROOT" zypper install -y cryptsetup
touch "$ROOT"/etc/crypttab
chmod 0600 "$ROOT"/etc/crypttab
# LUKS decryption for swap. This config causes us to create a new LUKS
# container for each boot (with a random key from /dev/urandom). Note that the
# UUID here is actually that of the small dummy filesystem we created earlier,
# which just lets us identify the correct device (hence offset=2048).
echo "swap UUID=$SWAP_DUMMY_UUID /dev/urandom swap,offset=2048,cipher=aes-xts-plain64,size=512,sector-size=4096" >>"$ROOT"/etc/crypttab
# Set up all of our zroot LUKS devices.
for luks_name in "${!zroot_luks_uuids[@]}"
do
luks_uuid="${zroot_luks_uuids[$luks_name]}"
echo "$luks_name UUID=$luks_uuid none luks,discard,initramfs,tpm2-device=auto" >>"$ROOT"/etc/crypttab
done
# Configure auto-mounts that are not managed by ZFS.
truncate --size=0 "$ROOT"/etc/fstab
if [[ "$boot_type" == "xfs-grub" ]]
then
cat >>"$ROOT"/etc/fstab <<-EOF
UUID=$BOOT_UUID /boot/ xfs defaults 0 2
EOF
fi
cat >>"$ROOT"/etc/fstab <<EOF
UUID=$EFI_UUID /boot/efi vfat defaults 0 2
/dev/mapper/swap none swap defaults 0 0
EOF
# Configure mdadm auto-loading.
# TODO: We probably need to set up the auto-scrubbing setup for /boot/efi...
chroot "$ROOT" zypper install -y mdadm
mkdir -p "$ROOT"/etc/mdadm/mdadm.conf.d
mdadm --detail --scan | tee "$ROOT"/etc/mdadm/mdadm.conf.d/00-root-mdraids.conf
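# For reference, the consistency check such an auto-scrub job would run can
# be triggered manually on any of the arrays, e.g.:
#   echo check > /sys/block/md127/md/sync_action  # md127 is an example name
#   cat /proc/mdstat                              # watch the check progress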
# Install a few other things for dracut to have.
chroot "$ROOT" zypper install -y dmraid busybox lvm2 libcap-progs nvme-cli jq open-iscsi squashfs
# We need to have dracut set up for tpm2-tss.
chroot "$ROOT" zypper install -y dracut-extra tpm2.0-tools
cat >"$ROOT"/etc/dracut.conf.d/50-dmcrypt-tpm2.conf <<EOF
# This is needed for our zroot pool which has LUKS-encrypted containers that
# use TPM-sealed keys.
add_dracutmodules+=" tpm2-tss "
EOF
# Always load zfs on boot (dracut cares about this too).
cat >"$ROOT"/etc/dracut.conf.d/50-zfs.conf <<EOF
# We use ZFS. Make sure it's enabled.
add_dracutmodules+=" zfs "
EOF
echo "zfs" >"$ROOT"/etc/modules-load.d/zfs.conf
# Make sure udev creates /dev/disk/by-id/dm-name-* symlinks.
cat >>"$ROOT"/etc/udev/rules.d/99-local-crypt.rules <<'EOF'
ENV{DM_NAME}!="", SYMLINK+="$env{DM_NAME}"
ENV{DM_NAME}!="", SYMLINK+="dm-name-$env{DM_NAME}"
EOF
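# With the rules above, udev should create symlinks such as (for example)
# /dev/zroot-crypt1 and /dev/dm-name-zroot-crypt1 for each container.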
if [[ "$boot_type" == "zboot-grub" ]]
then
# Make sure /boot is always imported.
cat >"$ROOT"/etc/systemd/system/zfs-import-zboot.service <<-EOF
[Unit]
DefaultDependencies=no
Before=zfs-import-scan.service
Before=zfs-import-cache.service
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/sbin/zpool import -N -o cachefile=none zboot
# Work-around to preserve zpool cache:
ExecStartPre=-/bin/mv /etc/zfs/zpool.cache /etc/zfs/preboot_zpool.cache
ExecStartPost=-/bin/mv /etc/zfs/preboot_zpool.cache /etc/zfs/zpool.cache
[Install]
WantedBy=zfs-import.target
EOF
chroot "$ROOT" systemctl enable zfs-import-zboot.service
fi
# Set up initrd.
kernel_version="$(find "$ROOT"/boot/vmlinuz-* | grep -Eo '[[:digit:]]*\.[[:digit:]]*\.[[:digit:]]*-.*-default' | head -n1)"
chroot "$ROOT" kernel-install add "$kernel_version" /boot/vmlinuz-"$kernel_version"
chroot "$ROOT" dracut --force --kver "$kernel_version"
# Install GRUB (we need -extras for zfs).
chroot "$ROOT" zypper install -y grub2-x86_64-efi{,-extras} dosfstools
grub2-probe "$ROOT/boot"
cat >>"$ROOT"/etc/default/grub <<EOF
GRUB_CMDLINE_LINUX+=" root=ZFS=zroot/ROOT/suse "
EOF
chroot "$ROOT" update-bootloader
chroot "$ROOT" \
grub2-install \
--target=x86_64-efi \
--efi-directory=/boot/efi \
--bootloader-id=opensuse \
--recheck \
--no-floppy
# Register both /boot/efi halves as valid boot entries. grub2-install does
# this for the /dev/md/boot-efi "pseudo" disk we have (and maybe for the
# underlying disks too?) but, just to be sure, we configure it explicitly now.
disk_num=0
for disk in "$DISK1" "$DISK2"
do
efibootmgr -O -d "$disk"
efibootmgr -c -g -d "$disk" -p "$efi_part" -L "opensuse-mirror$((++disk_num))" -l '\EFI\opensuse\grubx64.efi'
done
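# The resulting boot entries (one per disk) can be listed with, for example:
#   efibootmgr -v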
# Configure root passwd.
echo "root" | passwd -s -R "$ROOT"
# Unmount everything, as we should be ready for first boot...
findmnt -R "$ROOT" -o target,fstype -nl | \
awk '$2 != "zfs" { print $1 }' | tac | xargs -I{} sudo umount -lf {}
zfs umount -a
zpool export -a