Install Arch Linux with ZFS root filesystem, zfs-dkms, ZFSBootMenu, Pacman Auto-snapshots, Secure Boot enabled
Go into your BIOS settings and make sure Secure Boot is either turned off or set to Audit Mode.
Before moving on I need to point out that there exists a Bash script that can automate the configuration and install of a ZFS root system with Arch Linux. However, as convenient as it sounds, the script is limited in flexibility and scope. These limitations cannot be overcome unless one has the time and capacity to edit the script to their liking. If you want to install a ZFS root system as quickly as possible and don't care about any particulars, then take a good look at this GitHub page here.
Get the latest Arch Install media prebuilt with ZFS kernel module support (or to save even more time... skip to the next section)
It's also an option to build your own Arch ISO with all the required ZFS support. However, that takes a considerable amount of time and is beyond the scope of this guide. For now let's go with the below shortcut.
Maintained by the same developer who hosts the prebuilt ZFS-supported install media above, this script skips that step entirely and builds the necessary ZFS modules within the Arch install environment.
Download the Official Arch Linux install media here:
Boot with install media and run the following command:
➜ curl -s https://raw.githubusercontent.com/eoli3n/archiso-zfs/master/init | bash
➜ setfont ter-128n
Verify DNS and the internet connection are good (I recommend using a wired connection like ethernet for less hassle):
➜ ping yahoo.com
➜ reflector --country Japan --latest 5 --sort rate --save /etc/pacman.d/mirrorlist
➜ lsmod | grep zfs
zfs                  4218880  11
zunicode              339968  1 zfs
zzstd                 552960  1 zfs
zlua                  208896  1 zfs
zavl                   16384  1 zfs
icp                   331776  1 zfs
zcommon               110592  2 zfs,icp
znvpair               118784  2 zfs,zcommon
spl                   122880  6 zfs,icp,zzstd,znvpair,zcommon,zavl
We'll be using the UEFI boot method, which requires us to create an EFI partition to launch Linux. Run the following command to identify the storage devices and partition info:
➜ lsblk
NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
mmcblk0 179:0    0  29.1G  0 disk
nvme0n1 259:0    0 465.8G  0 disk
*Note: From a security standpoint it is possible to have the EFI (boot) partition on an entirely different storage device than the actual OS root filesystem, e.g., mmcblk0 or the Micro SD card. Skip below to "As mentioned above" if you're interested in this setup.
The M.2 device will be our target install storage device (nvme0n1). Save the storage path into a variable to make things easy:
➜ DISK=/dev/nvme0n1
You can use any partitioning tool... In this example we'll use sgdisk one-liners to create the partition scheme we need.
➜ sgdisk --zap-all $DISK
➜ sgdisk -n1:1M:+256M -t1:EF00 $DISK
➜ sgdisk -n2:0:0 -t2:BF00 $DISK
Note: If you're using a Libvirt \ Qemu virtual machine the above commands may fail with an obscure error. If this is the case try using gdisk in interactive mode to manually create the partition scheme. Also, don't forget to label the partitions correctly: EFI Boot Partition=ef00, Linux Install Partition=bf00
➜ gdisk /dev/vda
Now let's verify the partitions are good before moving on:
➜ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
mmcblk0     179:0    0  29.1G  0 disk
nvme0n1     259:0    0 465.8G  0 disk
├─nvme0n1p1 259:1    0   256M  0 part
└─nvme0n1p2 259:2    0 465.5G  0 part
As mentioned above, in this section we're going to create the EFI partition on a separate storage device or an SD card for additional security. The mmcblk0 used here is a SanDisk Extreme Pro (Spec: A1, U3, V30, 32GB) Micro SD card fitted into the laptop's media card slot reader... but you don't need to use anything fancy (it can even be a USB flash drive). Just be sure that the storage device or media used is something reliable (it is housing our EFI files after all). It's also a good idea to create a backup image of this entire storage device in case of loss, theft, corruption, or device failure. Check the Post installation Tips section later for further guidance.
➜ DISK0=/dev/mmcblk0
➜ DISK1=/dev/nvme0n1
➜ sgdisk --zap-all $DISK0
➜ sgdisk --zap-all $DISK1
➜ sgdisk -n1:1M:+256M -t1:EF00 $DISK0
➜ sgdisk -n1:0:0 -t1:BF00 $DISK1
➜ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
mmcblk0     179:0    0  29.7G  0 disk
└─mmcblk0p1 179:1    0   512M  0 part
nvme0n1     259:0    0 238.5G  0 disk
└─nvme0n1p1 259:1    0 238.5G  0 part
Create the EFI (boot) filesystem on the first partition:
➜ mkfs.vfat -v -F 32 -n EFI /dev/nvme0n1p1
Or if you're going the separate storage \ removable device route:
➜ mkfs.vfat -v -F 32 -n EFI /dev/mmcblk0p1
Create our zroot pool.
Note: The lowercase and capital o's matter in this command. If you're not using an actual SATA SSD you can leave the autotrim option off (I'm using an M.2\NVMe drive, so I'm leaving it off), and if you're using a virtual environment you can probably omit the compression option as well.
Note: At this point you may want to encrypt the rootfs and let ZFS prompt you for a passphrase to unlock the zroot pool (especially if you decided to use a separate boot device). I'll include these commands in the near future. See this link for further guidance.
➜ zpool create -f \
    -o ashift=12 \
    -o autotrim=off \
    -O devices=off \
    -O relatime=on \
    -O xattr=sa \
    -O acltype=posixacl \
    -O dnodesize=legacy \
    -O normalization=formD \
    -O compression=lz4 \
    -O canmount=off \
    -O mountpoint=none \
    -R /mnt zroot /dev/nvme0n1p2
Again, if you're going the separate storage \ removable device route, replace that last line with:
-R /mnt zroot /dev/nvme0n1p1
Verify the pool:
➜ zpool status
Create filesystem mountpoints and then import \ export test:
➜ zfs create zroot/ROOT
➜ zfs create -o canmount=noauto -o mountpoint=/ zroot/ROOT/arch
➜ zfs create -o mountpoint=/home zroot/home
➜ zpool export zroot
➜ zpool import -d /dev/nvme0n1p2 -R /mnt zroot -N
Tip: If you're planning to run Docker, Libvirt or something else that could get intensive, it might be prudent to add more mountpoints like so:
➜ zfs create -p -o mountpoint=/var/log zroot/var/log
➜ zfs create -p -o mountpoint=/var/lib/docker zroot/var/lib/docker
➜ zfs create -p -o mountpoint=/var/lib/libvirt zroot/var/lib/libvirt

The -p flag creates the intermediate zroot/var and zroot/var/lib datasets, which must exist before their children can be created.
Mount our mount points:
➜ zfs mount zroot/ROOT/arch
➜ zfs mount -a
➜ mkdir -p /mnt/{etc/zfs,boot/efi}
➜ mount /dev/nvme0n1p1 /mnt/boot/efi
Or if you're going the other setup, replace the last command above with this one here:
➜ mount /dev/mmcblk0p1 /mnt/boot/efi
Check if zfs mounted successfully:
➜ mount | grep mnt
zroot/ROOT/arch on /mnt type zfs (rw,relatime,xattr,posixacl)
zroot/home on /mnt/home type zfs (rw,relatime,xattr,posixacl)
If you went with the other setup, it should additionally show the SD card mounted:
zroot/ROOT/arch on /mnt type zfs (rw,relatime,xattr,posixacl)
zroot/home on /mnt/home type zfs (rw,relatime,xattr,posixacl)
/dev/mmcblk0p1 on /mnt/boot/efi type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro)
Make sure df shows all the mount points (including any additional ones you may have created earlier... log, docker, libvirt, etc.):
➜ df -k
zroot/ROOT/arch ...  /mnt
zroot/home      ...  /mnt/home
/dev/nvme0n1p1  ...  /mnt/boot/efi
Or if booting a removable device (SD card)...
zroot/ROOT/arch ...  /mnt
zroot/home      ...  /mnt/home
/dev/mmcblk0p1  ...  /mnt/boot/efi
If all is good, move on to set bootfs and create zfs cache file:
➜ zpool set bootfs=zroot/ROOT/arch zroot
➜ zpool set cachefile=/etc/zfs/zpool.cache zroot
➜ cp -v /etc/zfs/zpool.cache /mnt/etc/zfs
Install packages with pacstrap:
➜ pacman -Syy
➜ pacstrap /mnt base base-devel linux linux-headers linux-firmware intel-ucode amd-ucode fwupd udisks2 sbctl efitools dkms efibootmgr man-db man-pages git rust cargo nano
Generate and configure the filesystem table file.
You need to comment out all the entries (by adding # to the beginning of the line) except the one entry containing /boot/efi.
➜ genfstab -U -p /mnt >> /mnt/etc/fstab
➜ nano /mnt/etc/fstab
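If you'd rather script the commenting than do it by hand in nano, a sed one-liner can comment every uncommented line that does not mention /boot/efi. A sketch against a sample fstab (point it at /mnt/etc/fstab once you've reviewed the result; the sample entries are illustrative):

```shell
# Sample file standing in for /mnt/etc/fstab
fstab=/tmp/fstab
cat > "$fstab" <<'EOF'
zroot/ROOT/arch / zfs rw,xattr,posixacl 0 0
zroot/home /home zfs rw,xattr,posixacl 0 0
/dev/nvme0n1p1 /boot/efi vfat rw,relatime 0 2
EOF

# Comment out every uncommented line that does not contain /boot/efi
sed -i '\,/boot/efi,!s,^[^#],#&,' "$fstab"

cat "$fstab"
```

The `\,…,!` address negation selects lines not matching /boot/efi, and `s,^[^#],#&,` prefixes a `#` only when the line isn't already commented.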
Copy over the dns settings to the new system:
➜ cp -v /etc/resolv.conf /mnt/etc
As a long-time Arch user, I think editing the compile flags is a good idea, since you'll be compiling some of the best software available from the AUR.
➜ arch-chroot /mnt
➜ nano /etc/makepkg.conf
In /etc/makepkg.conf on the "CFLAGS" line, remove "-march" and "-mtune" and replace them with "-march=native". Scroll down to the line with MAKEFLAGS="-j2" and change it to MAKEFLAGS="-j$(nproc)". Near the bottom of the file look for the compression options, and add "--threads=0" to the COMPRESSZST and COMPRESSXZ commands.
Consult the Arch Linux wiki for additional guidance.
➜ nano /etc/makepkg.conf
...
CFLAGS="-march=native -O2 -pipe -fno-plt"
...
RUSTFLAGS="-Copt-level=2 -Ctarget-cpu=native -Cforce-frame-pointers=yes"
...
MAKEFLAGS="-j$(nproc)"
...
COMPRESSZST=(zstd -c -z -q --threads=0 -)
COMPRESSXZ=(xz -c -z --threads=0 -)
...
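The same edits can be applied non-interactively with sed. A sketch, shown against a sample file so you can verify the substitutions before touching the real /etc/makepkg.conf (the stock values below are what a default makepkg.conf typically contains):

```shell
# Sample standing in for /etc/makepkg.conf
conf=/tmp/makepkg.conf
cat > "$conf" <<'EOF'
CFLAGS="-march=x86-64 -mtune=generic -O2 -pipe -fno-plt"
#MAKEFLAGS="-j2"
COMPRESSZST=(zstd -c -z -q -)
COMPRESSXZ=(xz -c -z -)
EOF

sed -i \
  -e 's/-march=x86-64 -mtune=generic/-march=native/' \
  -e 's/^#\?MAKEFLAGS=.*/MAKEFLAGS="-j$(nproc)"/' \
  -e 's/^COMPRESSZST=.*/COMPRESSZST=(zstd -c -z -q --threads=0 -)/' \
  -e 's/^COMPRESSXZ=.*/COMPRESSXZ=(xz -c -z --threads=0 -)/' \
  "$conf"

cat "$conf"
```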
➜ useradd -m username
➜ passwd username
➜ usermod -aG users,sys,adm,log,scanner,power,rfkill,video,storage,optical,lp,audio,wheel username
➜ id username
Add "%wheel ALL=(ALL) ALL" without quotes using nano:
➜ nano /etc/sudoers.d/username
For pacman wrappers I've used Yay for a while but after trying Paru out I just never went back. I encourage you to use any wrapper you're comfortable or familiar with. In this case we'll go with Paru. If you've never used Paru before check out this cool cheat sheet here:
➜ su username
➜ sudo pacman -Syy
➜ git clone https://aur.archlinux.org/paru.git && cd paru
➜ makepkg -si
➜ cd
➜ paru -S zfs-dkms
These are about the bare minimum packages needed. You can even omit openssh (if you don't plan on doing any remote management) and terminus-font. Note: You can use dhcpcd in place of networkmanager as a lighter alternative.
➜ paru -S networkmanager reflector openssh terminus-font
In addition, for a more complete desktop experience I recommend installing the following packages.
Note: I placed a star next to packages that will require some manual intervention to get working. Look up the package in the Archlinux Wiki for guidance.
xdg-user-dirs xdg-utils bash-completion tmux inetutils net-tools dnsutils avahi* nss-mdns ntp* firewalld* apparmor* tlp* acpi_call acpid* bluez* bluez-utils gpm* cups* alsa-utils pipewire pipewire-alsa pipewire-pulse pipewire-jack sof-firmware smartmontools* lm_sensors* curl wget lftp rsync dmidecode lsof htop fcron zsh* grml-zsh-config fd fzf* exfatprogs ntfs-3g dosfstools neovim
➜ systemctl enable zfs-import-cache
➜ systemctl enable zfs-import.target
➜ systemctl enable zfs-mount
➜ systemctl enable zfs-share
➜ systemctl enable zfs-zed
➜ systemctl enable zfs.target
➜ systemctl enable NetworkManager
➜ systemctl enable reflector.timer
➜ sudo zgenhostid $(hostid)
➜ hostid
➜ sudo ln -sf /usr/share/zoneinfo/Pacific/Guam /etc/localtime
➜ hwclock --systohc
Generate your locales: edit /etc/locale.gen and uncomment all the locales you need, for example en_US.UTF-8.
➜ nano /etc/locale.gen
➜ locale-gen
Set your hostname by writing it to /etc/hostname:
➜ echo "hostname" > /etc/hostname
Then edit /etc/hosts, replacing hostname with your previously chosen hostname:
➜ echo "127.0.0.1 localhost" >> /etc/hosts
➜ echo "::1 localhost" >> /etc/hosts
➜ echo "127.0.0.1 hostname.localdomain hostname" >> /etc/hosts
Edit the reflector configuration file by adding the correct country and make sure you have "--sort rate".
➜ nano /etc/xdg/reflector/reflector.conf
This will make your console look cleaner... However, I urge you to customize these settings to your liking.
➜ sudo nano /etc/vconsole.conf
FONT=ter-128n
Note: After performing the commands below to download the zfsbootmenu EFI file, you may have to go into your BIOS (Boot) settings and specifically select this file to boot. Basically telling your BIOS, "Hey this is my boot loader. Use this file to help load the OS!".
➜ mkdir -p /boot/efi/EFI/zbm
➜ wget https://get.zfsbootmenu.org/latest.EFI -O /boot/efi/EFI/zbm/zfsbootmenu.EFI
➜ efibootmgr --disk /dev/nvme0n1 --part 1 --create --label "ZFSBootMenu" --loader '\EFI\zbm\zfsbootmenu.EFI' --unicode "spl_hostid=$(hostid) zbm.timeout=3 zbm.prefer=zroot zbm.import_policy=hostid" --verbose
If you're going the external storage device route, use this command instead:
➜ efibootmgr --disk /dev/mmcblk0 --part 1 --create --label "ZFSBootMenu" --loader '\EFI\zbm\zfsbootmenu.EFI' --unicode "spl_hostid=$(hostid) zbm.timeout=3 zbm.prefer=zroot zbm.import_policy=hostid" --verbose
➜ zfs set org.zfsbootmenu:commandline="noresume init_on_alloc=0 rw spl.spl_hostid=$(hostid)" zroot/ROOT
Make sure the HOOKS line looks like the following, and edit COMPRESSION_OPTIONS near the bottom of the file.
"HOOKS=(base udev autodetect modconf block keyboard zfs filesystems)"
"COMPRESSION_OPTIONS=(-c -z -q --threads=0 -)"
➜ nano /etc/mkinitcpio.conf
➜ mkinitcpio -P
Note: There are a lot of settings you can change in this file that can help further improve functionality, performance, and better tailor Arch Linux to your hardware. As usual, consult the Arch Linux wiki for guidance.
➜ passwd
➜ exit
➜ umount /mnt/boot/efi ➜ zfs umount -a ➜ zpool export zroot ➜ reboot
This part is completely optional, but I believe taking the opportunity to update your BIOS version in this section could save you some time later... And you don't have to use this method if you prefer another way.
➜ sudo fwupdmgr get-devices
Dell Inc. Latitude 7400
│
├─Cannon Point-LP LPC Controller:
│     Device ID:       71b31258b13a4b2793e529856a190f8fb02ad151
│     Current version: 30
│     Vendor:          Intel Corporation (PCI:0x8086)
│     GUIDs:           e9af651b-e3d5-55ec-b0f2-77c927119317 ← PCI\VEN_8086&DEV_9D84
│                      2b36d90a-fb29-585b-807c-ad4836cb3256 ← PCI\VEN_8086&DEV_9D84&REV_30
│                      a876086d-1455-512b-9346-8d2bb23bd445 ← PCI\VEN_8086&DEV_9D84&SUBSYS_102808E1
│                      88e1319f-1116-5e38-8354-9f64be2ea73d ← PCI\VEN_8086&DEV_9D84&SUBSYS_102808E1&REV_30
│                      2d80f689-0b5e-5c4b-b6df-bd767f6e9f05 ← INTEL_SPI_CHIPSET\ID_PCH300
│     Device Flags:    • Internal device
│                      • Cryptographic hash verification is available
...
➜ sudo fwupdmgr refresh
Updating lvfs
Downloading…             [**************************************     ]
Successfully downloaded new metadata: 1 local device supported
➜ sudo fwupdmgr get-updates
Devices with no available firmware updates:
 • SSDPEKKF256G8 NVMe INTEL 256GB
 • TPM
 • Thunderbolt host controller
 • UEFI dbx
...
➜ sudo fwupdmgr update
Devices with no available firmware updates:
 • SSDPEKKF256G8 NVMe INTEL 256GB
 • TPM
 • Thunderbolt host controller
 • UEFI dbx
╔══════════════════════════════════════════════════════════════════════════════╗
║ Upgrade System Firmware from 1.26.0 to 1.41.1?                               ║
╠══════════════════════════════════════════════════════════════════════════════╣
║ This stable release fixes the following issues:                              ║
║                                                                              ║
║ • This release contains security updates as disclosed in the Dell            ║
║   Security Advisory.                                                         ║
║                                                                              ║
║ Latitude 7400 must remain plugged into a power source for the duration of    ║
║ the update to avoid damage.                                                  ║
╚══════════════════════════════════════════════════════════════════════════════╝
Perform operation? [Y|n]: Y
Waiting…                 [***************************************]
Less than one minute remaining…
Successfully installed firmware
Do not turn off your computer or remove the AC adapter while the update is in progress.
An update requires a reboot to complete. Restart now? [y|N]: Y
Note: Getting fwupdmgr to update your BIOS version seamlessly, without first needing to disable Secure Boot, requires some additional setup and configuration which I'm still investigating. I believe you'll need to install shim-signed from the AUR. Once I can confirm I'm able to do it successfully myself, I'll update this guide with the instructions. Please let me know if you've already got this setup fully working.
Moving on... check for TPM requirements and the current status of Secure Boot:
➜ bootctl
System:
      Firmware: UEFI 2.80 (American Megatrends 5.26)
 Firmware Arch: x64
   Secure Boot: disabled (audit)
  TPM2 Support: yes
  Measured UKI: yes
  Boot into FW: supported
...
➜ sbctl status
Installed:      ✓ sbctl is installed
Owner GUID:     bd4a58ba-6a8d-4ee3-866e-1e9f1eb7e690
Setup Mode:     ✗ Enabled
Secure Boot:    ✗ Disabled
Vendor Keys:    none
Running the commands above, you want to verify that TPM2 support is present and "Secure Boot:" shows "disabled (audit)". If it's showing something different than the output displayed here, you may need to reboot into your computer's BIOS settings and verify you have the TPM chip enabled and Secure Boot set to "Audit" or "Setup" mode. If you don't have those options, you should see another option to delete all keys. On my Dell Intel laptop, I also needed to enable "Trusted Execution", "Intel Virtualization Technology", and "VT for Direct I/O" to get everything working.
Once you have that configured correctly and you have the same command output (or something similar) as shown above we can move on...
➜ sudo sbctl create-keys
Created Owner UUID a9fbbdb7-a05f-48d5-b63a-08c5df45ee70
Creating secure boot keys...✔
Secure boot keys created!
➜ sudo sbctl enroll-keys
Enrolling keys to EFI variables...✔
Enrolled keys to the EFI variables!
Or If you had to clear out your keys in the BIOS settings:
➜ sudo sbctl enroll-keys --microsoft
Enrolling keys to EFI variables...✔
Enrolled keys to the EFI variables!
➜ sudo sbctl verify
Verifying file database and EFI images in /boot/efi...
✗ /boot/vmlinuz-linux is not signed
✗ /boot/vmlinuz-linux-vfio is not signed
✗ /boot/efi/EFI/zbm/zfsbootmenu.EFI is not signed
✗ /boot/efi/EFI/arch/fwupdx64.efi is not signed
Sign all the EFI files and any additional kernels you might have installed:
➜ sudo sbctl sign -s /boot/efi/EFI/zbm/zfsbootmenu.EFI
➜ sudo sbctl sign -s /boot/efi/EFI/arch/fwupdx64.efi
➜ sudo sbctl sign -s /boot/vmlinuz-linux
➜ sudo sbctl sign -s /boot/vmlinuz-linux-vfio
➜ sudo sbctl verify
Verifying file database and EFI images in /boot/efi...
✓ /boot/vmlinuz-linux is signed
✓ /boot/vmlinuz-linux-vfio is signed
✓ /boot/efi/EFI/zbm/zfsbootmenu.EFI is signed
✓ /boot/efi/EFI/arch/fwupdx64.efi is signed
➜ sudo sbctl list-files
/boot/efi/EFI/arch/fwupdx64.efi
Signed:         ✓ Signed
/boot/efi/EFI/zbm/zfsbootmenu.EFI
Signed:         ✓ Signed
/boot/vmlinuz-linux
Signed:         ✓ Signed
/boot/vmlinuz-linux-vfio
Signed:         ✓ Signed
Reboot and turn Secure Boot on or from Audit \ Setup to "Deploy" mode (if available) in the BIOS. If your system boots up successfully you should be good to go. You can run this command again to verify Secure Boot is fully enabled.
➜ sbctl status
Installed:      ✓ sbctl is installed
Owner GUID:     bd4a58ba-6a8d-4ee3-866e-1e9f1eb7e690
Setup Mode:     ✓ Disabled
Secure Boot:    ✓ Enabled
Vendor Keys:    none
Whenever you update the kernel or install a new kernel, a hook should automatically sign the boot images & EFI files.
If you're having issues getting Secure Boot fully working, then scroll down to the References & Software section and take a look at the "Setting up Arch + LUKS + BTRFS + systemd-boot + apparmor + Secure Boot + TPM 2.0" link for further troubleshooting.
If you made it this far into this guide and you're able to boot into a fully operational ZFS-root system, then I commend you, as that should be a testament to your focus and tenacity. If the system did not boot up correctly (I still commend you!), don't despair: just retrace your steps carefully and look into the troubleshooting section at the bottom of this guide for any clues.
Setting up a Swap solution using zram
I took these instructions straight from the Arch Linux wiki (https://wiki.archlinux.org/title/Zram). We'll be going with the udev rule to keep everything straightforward and persistent.
Verify you're able to load the zram module with modprobe:
➜ sudo modprobe zram
➜ lsmod | grep zram
zram                   61440  0
842_decompress         16384  1 zram
842_compress           24576  1 zram
lz4hc_compress         20480  1 zram
lz4_compress           24576  1 zram
Load the zram module at boot:
➜ sudo nano /etc/modules-load.d/zram.conf
zram
Create a udev rule and decide how much RAM zram can use:
➜ nano /etc/udev/rules.d/99-zram.rules
ACTION=="add", KERNEL=="zram0", ATTR{initstate}=="0", ATTR{comp_algorithm}="zstd", ATTR{disksize}="2G", TAG+="systemd"
Add /dev/zram0 to your fstab with a higher-than-default priority and the x-systemd.makefs option:
➜ sudo nano /etc/fstab
/dev/zram0 none swap defaults,discard,pri=100,x-systemd.makefs 0 0
Optimize swap-on-RAM settings:
➜ sudo nano /etc/sysctl.d/99-vm-zram-parameters.conf
vm.swappiness = 180
vm.watermark_boost_factor = 0
vm.watermark_scale_factor = 125
vm.page-cluster = 0
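The 2G disksize used in the udev rule above is just a fixed example. If you'd rather size zram relative to installed RAM (half of physical memory is a common rule of thumb, not something this guide mandates), you can compute a value from /proc/meminfo; the variable name here is my own:

```shell
# Half of MemTotal, in KiB, as a candidate zram disksize value
half_ram_kib=$(awk '/^MemTotal:/ {print int($2 / 2)}' /proc/meminfo)
echo "Suggested zram disksize: ${half_ram_kib}K"
```

You would then substitute that value (e.g. `ATTR{disksize}="8G"`) into the udev rule.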
Reboot the system and verify everything works:
➜ lsmod | grep zram
zram                   61440  0
842_decompress         16384  1 zram
842_compress           24576  1 zram
lz4hc_compress         20480  1 zram
lz4_compress           24576  1 zram
➜ swapon
NAME       TYPE      SIZE USED PRIO
/dev/zram0 partition   2G   0B  100
➜ zramctl
NAME       ALGORITHM DISKSIZE DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram0 zstd            2G   4K   59B    4K       8 [SWAP]
Create a backup of the external boot device
As mentioned earlier in this guide, it's a good idea to back up your SD card or any removable storage device housing our EFI files.
Identify the boot mount point and temporarily unmount it:
➜ mount | grep boot
/dev/mmcblk0p1 on /boot/efi type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro)
➜ sudo umount /boot/efi
Warning: While /boot/efi is unmounted, do not attempt any software updates, as doing so could break the OS!
Use dd and bzip2 to image the data into a compressed file in one go:
➜ sudo dd if=/dev/mmcblk0 | bzip2 -c > ~/sdcard-backup-10-13-25.img.bz2
62333952+0 records in
62333952+0 records out
31914983424 bytes (32 GB, 30 GiB) copied, 461.963 s, 69.1 MB/s
sudo dd if=/dev/mmcblk0  6.71s user 38.70s system 9% cpu 7:47.53 total
bzip2 -c > ~/sdcard-backup-10-13-25.img.bz2  267.58s user 4.18s system 58% cpu 7:47.57 total
Check the compressed file for any issues:
➜ file ~/sdcard-backup-10-13-25.img.bz2
sdcard-backup-10-13-25.img.bz2: bzip2 compressed data, block size = 900k
➜ bzip2 -t ~/sdcard-backup-10-13-25.img.bz2
bzip2 -t sdcard-backup-10-13-25.img.bz2  71.23s user 0.05s system 99% cpu 1:11.28 total
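For extra assurance beyond bzip2 -t, you can confirm the decompressed stream hashes identically to the source device. A sketch of the idea, demonstrated on an ordinary file (substitute /dev/mmcblk0 and your real backup file for the real check):

```shell
# Stand-in for the real device, so the round trip can be demonstrated safely
src=/tmp/fake-device.img
dd if=/dev/zero of="$src" bs=1M count=4 status=none

# Image it the same way as the guide does
bzip2 -c "$src" > /tmp/backup.img.bz2

# The decompressed stream should hash identically to the source
orig_sum=$(sha256sum < "$src" | cut -d' ' -f1)
back_sum=$(bzip2 -dc /tmp/backup.img.bz2 | sha256sum | cut -d' ' -f1)
[ "$orig_sum" = "$back_sum" ] && echo "backup verified"
```

Note that hashing the live device only makes sense while /boot/efi is still unmounted, so nothing changes between the backup and the check.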
Operation complete, let's mount our device back like it was before:
➜ sudo mount /dev/mmcblk0p1 /boot/efi
➜ mount | grep boot
/dev/mmcblk0p1 on /boot/efi type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro)
Use this command to restore the backup file onto the same (or a new) storage device:
➜ sudo bzip2 -dc ~/sdcard-backup-10-13-25.img.bz2 | dd of=/dev/mmcblk0
Auto snapshots with every Pacman transaction
One of the main features of the ZFS filesystem is its ability to take filesystem snapshots. This feature is invaluable, especially during major operating system upgrades and software updates. However, initiating a manual ZFS snapshot every time something changes on the system can be very tedious. Using a Pacman hook is a very effective way to solve this issue.
- https://aur.archlinux.org/packages/pacman-zfs-hook
- https://github.com/RileyInkTheCat/Pacman-ZFS-Hook
The hook calls on the "zfs-snap-pac" script to create a snapshot whenever there is a pacman transaction. You can customize this script to your liking. For example, we'll modify the date command within the GetSnapshotName() function so that it names snapshots in a more meaningful manner. After installing the script from the AUR, edit it like so:
➜ sudo nano /usr/share/libalpm/scripts/zfs-snap-pac
...
GetSnapshotName() {
    local time=$(date +%s)
    snapshotName="$snapshotName$time"
}
...
Example output of default snapshot naming scheme:
➜ date +%s
1708100748
New human friendly naming scheme:
➜ date +%F-%R
2024-02-17-02:25
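Putting it together, the modified function would look something like this. This is a sketch: GetSnapshotName and snapshotName come from the zfs-snap-pac script, and the "pacman-" prefix here is an assumed stand-in for whatever prefix the script sets before calling the function:

```shell
# Assumed prefix; in the real script, snapshotName is set before this runs
snapshotName="pacman-"

# Human-friendly variant of the script's GetSnapshotName()
GetSnapshotName() {
    local time
    time=$(date +%F-%R)               # e.g. 2024-02-17-02:25
    snapshotName="$snapshotName$time"
}

GetSnapshotName
echo "$snapshotName"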
Creating snapshots every time there's a Pacman transaction can quickly take up valuable space if old snapshots are never deleted. Use another hook to automate this task and avoid potential future storage issues.
- https://aur.archlinux.org/packages/zfs-prune-snapshots
- https://github.com/bahamas10/zfs-prune-snapshots
Use the hook file below to automatically call the zfs-prune-snapshots script after every Pacman transaction has completed. The argument "2w" instructs the script to delete any ZFS snapshot 2 weeks old or older.
➜ cat /usr/share/libalpm/hooks/01-zfs-prune-pac.hook
[Trigger]
Operation = Upgrade
Operation = Remove
Type = Package
Target = *

[Action]
Description = Pruning ZFS snapshots...
When = PostTransaction
Exec = /usr/bin/zfs-prune-snapshots 2w
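If the AUR package didn't drop the hook in place for you, it can be created by hand. A sketch writing that same hook file (shown against a temporary DESTDIR so it can be tried safely; drop DESTDIR to write the real path, and adjust "2w" to your preferred retention):

```shell
# Demo prefix; set DESTDIR="" to target the real /usr/share/libalpm/hooks
DESTDIR=/tmp/hookdemo
install -d "$DESTDIR/usr/share/libalpm/hooks"

cat > "$DESTDIR/usr/share/libalpm/hooks/01-zfs-prune-pac.hook" <<'EOF'
[Trigger]
Operation = Upgrade
Operation = Remove
Type = Package
Target = *

[Action]
Description = Pruning ZFS snapshots...
When = PostTransaction
Exec = /usr/bin/zfs-prune-snapshots 2w
EOF
```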
For recovery situations with a broken pacman and outdated glibc
This is an issue I've seen when you don't update Arch Linux for a long time (months, years, etc.). To avoid it, it's important you perform this step now (before pacman breaks and you're SOL).
Install pacman-static from the AUR (its dependencies are statically compiled into one binary):
➜ paru -S pacman-static
Create a Netboot Arch Linux \ Online Recovery System
The idea behind this recovery system is to have access to an "always working" and "up to date" Arch Linux system from which we can easily carry out recovery efforts at any time. This recovery system needs to be versatile and not configuration-dependent, with the only requirement being a wired internet connection.
First you'll need to download a copy of the Arch Linux ipxe x86_64 UEFI executable:
➜ wget https://archlinux.org/static/netboot/ipxe-arch.efi
Sign and copy it to the boot \ efi partition:
➜ sudo mkdir /boot/efi/EFI/netboot
➜ sudo cp -v ./ipxe-arch.efi /boot/efi/EFI/netboot
'./ipxe-arch.efi' -> '/boot/efi/EFI/netboot/ipxe-arch.efi'
➜ sudo sbctl sign -s /boot/efi/EFI/netboot/ipxe-arch.efi
✓ Signed /boot/efi/EFI/netboot/ipxe-arch.efi
Now you should be able to netboot (aka PXE boot) the Arch Linux installer from your EFI shell. If your BIOS supports it, look in the boot settings and add a boot entry containing the path to ipxe-arch.efi. That way, when you boot your computer, press the appropriate F key to access your BIOS's boot menu and take it from there.
Launching from your EFI shell:
# Enter your EFI partition FS0 or FS1
FS1:
cd EFI\netboot
# Start the efi file
ipxe-arch.efi
Note: Before netbooting Arch Linux, be sure to temporarily disable Secure Boot or set it to "Audit" mode in your BIOS settings. Not doing so will cause the following error:
Could not select: Exec Format error (http://ipxe.org/2e008081)
Once you've booted into the Arch Linux Netboot menu, select "Release:" and choose a release date that's about a month or two old. Choosing a date that is too recent will not work, because the ZFS modules haven't been built yet for that kernel version.
After that, select "Choose a mirror" and find the closest Country \ Server to your location.
Finally, select "Boot Arch Linux" and let it fetch and boot the Linux image for the release date you selected.
Once Arch Linux is booted up and you're in the shell, enter the following command to start building the ZFS kernel modules:
➜ curl -s https://raw.githubusercontent.com/eoli3n/archiso-zfs/master/init | bash
The script will start building the modules and may spit out the following error, which you can safely ignore:
>Install zfs-dkms error: command failed to execute correctly
However, if you get this error, you will unfortunately have to try booting again and selecting an older release date:
>Install zfs-dkms modprobe: FATAL: Module zfs not found in directory /lib/modules/...
Run the following command to verify the ZFS modules are loaded:
➜ lsmod | grep zfs
zfs                  6602751  0
spl                   159744  1 zfs
If the above command checks out, continue down and see section System Rescue \ Troubleshooting to proceed.
To fix the "WARNING: Possible missing firmware" messages every time the initramfs is regenerated:
➜ paru -S mkinitcpio-firmware
how to fix missing libcrypto.so.1.1?
/var stays busy at shutdown due to journald #867
Arch Linux, Aur error - FAILED unknown public key
zfs-dkms depends on a specific version of the zfs-utils, and zfs-utils depend on a specific version of zfs-dkms, which completely prevents me from updating them
Use this fix script by ghost:
#!/bin/zsh
paru -Sy

g='/Version/{print $3}'
d1=$(paru -Qi zfs-dkms  | gawk "$g")
d2=$(paru -Si zfs-dkms  | gawk "$g")
u1=$(paru -Qi zfs-utils | gawk "$g")
u2=$(paru -Si zfs-utils | gawk "$g")

if [[ $d1 == $d2 || $u1 == $u2 ]]; then
    echo "zfs is up to date"
    exit 0
fi

paru -Sy zfs-dkms zfs-utils \
    --assume-installed zfs-dkms=$d1 --assume-installed zfs-dkms=$d2 \
    --assume-installed zfs-utils=$u1 --assume-installed zfs-utils=$u2
Mounting ZFS using live boot media
If the system is broken and you need to perform troubleshooting tasks to recover, it's not difficult to mount zroot using live media (especially if you've followed the steps above and created an Arch Netboot \ Live Recovery Environment). Follow the commands below and substitute your ZFS pool names \ partitions where needed.
➜ lsblk
➜ zpool import -f -N -R /mnt zroot
➜ zfs list
➜ zfs mount zroot/ROOT/arch_mpv-libplacebo2_NEW
➜ ls /mnt
➜ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sr0          11:0    1 883.3M  0 rom
mmcblk0     179:0    0  29.7G  0 disk
└─mmcblk0p1 179:1    0   512M  0 part /boot/efi
zram0       253:0    0     2G  0 disk [SWAP]
nvme0n1     259:0    0 238.5G  0 disk
└─nvme0n1p1 259:1    0 238.5G  0 part
➜ mount /dev/mmcblk0p1 /mnt/boot/efi
➜ zfs mount zroot/home
➜ mount | grep mnt
➜ arch-chroot /mnt
If you're mounting using a different distro, e.g. Alpine Linux:
➜ zpool import -f zroot
➜ mount -t zfs zroot/ROOT/alpine /mnt
➜ mount -t zfs zroot/home /mnt/home
➜ mount /dev/vda1 /mnt/boot/efi
➜ mount | grep mnt
➜ chroot /mnt /usr/bin/env sh
Unmount and reset the ZFS pool
➜ zfs umount -a
➜ zpool export -f zroot
While writing this lengthy setup guide, I spent hours referencing quite a number of documents and online sources. Without these sources and the people who dedicated their time to creating and sharing information, this guide would not have been possible. I list all the sources I found valuable here, not only to give credit but also in the hope that you may find them useful as well.
2022: Arch Linux Root on ZFS from Scratch Tutorial
Guide: Install Arch Linux on an encrypted zpool with ZFSBootMenu as a bootloader
Debian Bullseye installation with ESP on the zpool disk
Setting up Arch + LUKS + BTRFS + systemd-boot + apparmor + Secure Boot + TPM 2.0 - A long, nightmarish journey, now simplified
sbctl: Key creation and enrollment
Configure systemd ZFS mounts
The Archzfs unofficial user repository offers multiple ways to install the ZFS kernel module.
Arch Linux pacman hooks
Paru: Feature packed AUR helper
Please feel free to leave any helpful comments or suggestions.
You could also take over Secure Boot by deleting keys (including the PK) from within UEFI, creating and enrolling keys via sbctl, adding hooks to sign the kernel, and adding hooks to dkms to sign the module using your sbctl-managed db.key. Oh, and use the hook zbm-sign.pl from the ZBM GitHub repo to sign ZBM as well, in which case you'd have both Secure Boot enabled and this whole setup.
Or, y'know, at least edit the title to reflect that secure boot is not enabled.