After booting into the ALEZ ISO, I run:
mkdir /tmp/usb && mount /dev/sdf /tmp/usb   # the USB stick with my saved config files
cp /tmp/usb/vdev_id.conf /etc/zfs && udevadm trigger
zpool import -l -a   # -l prompts for the encryption keys
mount -t zfs nand/sys/dirty/root/default /mnt
arch-chroot /mnt /usr/bin/zsh
# Do the work
umount -a   # still inside the chroot
exit
umount /mnt
reboot
Then, to redo the EFI system partition:
umount /mnt/boot
dd if=/dev/zero of=/dev/nvme0n1p1   # wipe the old ESP
mkfs.fat -F32 /dev/nvme0n1p1
mount /dev/nvme0n1p1 /mnt/boot
echo "##### ***** #####" >> /mnt/etc/fstab   # marker so I can spot the regenerated entries
genfstab -U /mnt >> /mnt/etc/fstab
mkinitcpio -p linux
# Reinstall intel-ucode
grub-install --target=x86_64-efi --efi-directory=/boot --bootloader-id=grub
grub-mkconfig -o /boot/grub/grub.cfg
Reference: https://www.funtoo.org/ZFS_Install_Guide
Do I need nvme_load=YES?
Kernel parameters from a forum post: zfs=rpool/ROOT/rootfs root=ZFS=rpool/ROOT/rootfs
Added nvme_core and zfs to the MODULES array in /etc/mkinitcpio.conf.
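For posterity, a sketch of the relevant /etc/mkinitcpio.conf lines at this point (the HOOKS order is a reconstruction from the guides I was following, not a verbatim copy of my config):

MODULES=(nvme_core zfs)
HOOKS=(base udev autodetect modconf block keyboard zfs filesystems)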
grub-install --target=x86_64-efi --efi-directory=/boot --bootloader-id=grub --recheck
grub-mkconfig -o /boot/grub/grub.cfg
That seems to have gotten farther, but it cannot find the pools.
- Use the internal USB
- Use SSD I took from the other computer
- Partition and mirror the other computer's SSD and the boot SSD
- Some NVME tips
- Maybe something like this StackExchange answer
- Some good stuff in this forum post
- Follow the Debian Buster wiki for encrypted ZFS on boot
- A tip on whole disk encryption
2019-08-19T08:45:30 - I am going to copy over the cachefile every time I mess with it. I am also going to make a copy on the boot drive in case I can only access it there.
- Could not install grub with vdev_id.conf and vdevs; it worked without those.
- Made sure to copy zpool.cache to /etc/zfs. I also copied it to /boot (staging commands sketched after this list).
- I took away some references in the grub config that listed /boot as ZFS, since it's not.
- ...and I forgot to reinstall the kernel and microcode.
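Since the cachefile keeps coming up: this is roughly how I stage it each time (nand is my pool; the copy assumes the new root is mounted at /mnt):

zpool set cachefile=/etc/zfs/zpool.cache nand   # regenerate the cachefile for the imported pool
cp /etc/zfs/zpool.cache /mnt/etc/zfs/zpool.cache   # put it where the zfs hook will look at boot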
An Arch Linux forum post says removing autodetect allowed the poster to boot a root ZFS system. I'll try it (2019-08-20T14:57:23).
I am making a post on the Arch Linux forum.
They closed it because I used ALEZ and John Ramsden's blog. I guess I could ask for help there... or just try Reddit or the ALEZ community.
I want to install Arch with a ZFS root on a new machine, but I cannot get it to work. I have installed Arch many times and have run it as my main desktop for a decade.
I am using the ALEZ install image so that I have ZFS available on the install media. I have followed the Arch Linux wiki and a guide on John Ramsden's blog trying to get Arch to boot with a ZFS root. I have set all of the ZFS mount points to legacy, hoping to make it easier. I have also tried rEFInd and systemd-boot as boot managers, but have gotten the "closest" using grub. On one attempt with grub the keyboard worked when I was dropped to the rootfs prompt, but not this time. I was able to figure out how to make the NVMe work and the ZFS modules load, but now I get this with no keyboard:
:: running early hook [udev]
Starting version 242.84-1-arch
:: running hook [udev]
:: Triggering uevents...
:: running hook[zfs]
ERROR: device 'ZFS=nand/sys/dirty/root/default' not found. Skipping fsck.
cannot open 'nand': no such pool
ZFS: Importing pool nand.
cannot import 'nand': no such pool available
/init: line 51: die: not found
cannot open 'nand/sys/dirty/root/default': dataset does not exist
:: running late hook [zfs] no pools available to import
:: running late hook [usr]
:: running cleanup hook [shutdown]
:: running cleanup hook [udev]
ERROR: Failed to mount the real root device. Bailing out, you are on your own. Good luck.
sh: can't access tty; job control turned off
[rootfs]#
I copy over the zpool.cache file each time to make sure it's on the root device. I run zpool export -a to properly close out the zpool.
I have made a gist with relevant configuration files and command outputs. These include:
- /etc/default/grub
- /etc/default/zfs (I haven't done much with this, but maybe it's useful?)
- /etc/mkinitcpio.conf
- fdisk -l
- zfs list
- zpool status
Does anyone see a reason this might not be working? What is your best idea for what to change or check?
I posted this.
I have built a custom archiso and it works with zfs.
I want to:
- consider my disk usage
- consider Ramsden's blog and incorporate the VirtualBox storage locations
- design a file system
- get it rebooting as soon as possible to see if it works
I want to have encrypted datasets. The options on creation will be as follows (a sketch of the create commands follows after these lists):
For HDDs:
ashift=12
atime=off
compression=lz4
xattr=sa
For SSDs:
ashift=13
atime=off
compression=lz4
xattr=sa
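Pool creation could look something like this. The device paths are hypothetical, the mirror layouts are my plan rather than settled fact, and the encryption settings (aes-256-gcm with a passphrase) are placeholders for however the keys end up managed:

# HDD pool
zpool create -o ashift=12 \
    -O atime=off -O compression=lz4 -O xattr=sa \
    -O encryption=aes-256-gcm -O keyformat=passphrase \
    rust mirror /dev/disk/by-id/ata-HDD1 /dev/disk/by-id/ata-HDD2

# SSD pool, mirrored partitions on the two SSDs
zpool create -o ashift=13 \
    -O atime=off -O compression=lz4 -O xattr=sa \
    -O encryption=aes-256-gcm -O keyformat=passphrase \
    nand mirror /dev/disk/by-id/nvme-SSD1-part2 /dev/disk/by-id/nvme-SSD2-part2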
I will have the following ZFS paths:
nand/root/default
rust
For nand, it will be two mirrored partitions, one on each of the SSDs. On the free space left on the larger SSD, I will have (attach commands sketched after this list):
- 2 GB ZIL
- 25% of the drive left alone
- Remainder L2 ARC
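Assuming the ZIL (SLOG) and L2ARC are there to speed up the rust pool, attaching them could look like this (partition paths are hypothetical):

zpool add rust log /dev/disk/by-id/nvme-SSD1-part3     # 2 GB SLOG
zpool add rust cache /dev/disk/by-id/nvme-SSD1-part4   # remainder as L2 ARC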
Here are the file system paths that I want on rust (a creation sketch follows the list):
/var/cache
/var/lib/libvirt
/var/lib/machines
/var/lib/docker
/home
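A sketch of creating those datasets, assuming plain mountpoint properties (these are data mounts, not the root). The rust/var and rust/var/lib levels are containers only, so I mark them canmount=off:

zfs create -o canmount=off rust/var
zfs create -o canmount=off rust/var/lib
zfs create -o mountpoint=/var/cache rust/var/cache
zfs create -o mountpoint=/var/lib/libvirt rust/var/lib/libvirt
zfs create -o mountpoint=/var/lib/machines rust/var/lib/machines
zfs create -o mountpoint=/var/lib/docker rust/var/lib/docker
zfs create -o mountpoint=/home rust/home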
I made a post on the Arch Forums. It seems I am back to where the NVMe is not working, but I don't know how to get past it. I have hit my specified time limit, so after lunch I am going to switch to making an ext4 root.
I am stuck with the NVMe not loading.
I am going to try a few things:
- ahci and nvme_load=yes
- add the vmd module
- I also enabled VMD in the BIOS. I had disabled it for testing; I am hopeful this might be related. (See the sketch after this list.)
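What the MODULES line ended up looking like once vmd was in the mix, plus the rebuild step (a reconstruction, not a verbatim copy of my config):

# /etc/mkinitcpio.conf
MODULES=(vmd nvme_core zfs)
# then regenerate the initramfs
mkinitcpio -p linux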
2020-01-03T08:50:21: I forgot to run mkinitcpio... I will probably have to redo this.
THIS WORKED
Holy shit, what a saga. I'm going to update my recent post. Maybe this will help someone in the future.
I plan to install ZFS and get my extra folders mounting. I have decided to forgo a ZFS root. I will make the necessary backups to my encrypted ZFS partition and reap the performance benefits of the NVMe drive operating by itself. I am less concerned about data loss there, as I will put everything important on the ZFS system.
I have these set up (df -h output; the rust/* lines are the ZFS datasets):
Filesystem Size Used Avail Use% Mounted on
dev 31G 0 31G 0% /dev
run 31G 1.4M 31G 1% /run
/dev/nvme0n1p2 938G 5.3G 885G 1% /
tmpfs 31G 215M 31G 1% /dev/shm
tmpfs 31G 0 31G 0% /sys/fs/cgroup
tmpfs 31G 244K 31G 1% /tmp
/dev/nvme0n1p1 511M 48M 464M 10% /boot
rust/home 3.5T 288G 3.2T 9% /home
rust/var/lib/docker 3.2T 256K 3.2T 1% /var/lib/docker
rust/var/lib/libvirt 3.2T 384K 3.2T 1% /var/lib/libvirt
rust/pacman-cache 3.2T 1.2G 3.2T 1% /var/lib/pacman-cache
rust/var/lib/machines 3.2T 256K 3.2T 1% /var/lib/machines
tmpfs 6.2G 12K 6.2G 1% /run/user/16431
I boot into command-line mode by default. There I load the ZFS keys with sudo zfs load-key -a. Then I mount those folders with sudo zfs mount -a.
I can then exit and log back in to reset everything. I start awesome with startx.
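Those two commands could live in a tiny helper script; unlock-zfs is just my hypothetical name for it:

#!/bin/sh
# unlock-zfs: load keys for all encrypted datasets, then mount everything
sudo zfs load-key -a && sudo zfs mount -a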
I used GNOME Keyring as my SSH agent, which gives me functionality similar to macOS: the password for the keys will be remembered across reboots. I can add keys with:
/usr/lib/seahorse/ssh-askpass my_key
I used PAM and xinitrc to get this working:
PAM method
Start the gnome-keyring-daemon from /etc/pam.d/login:
Add auth optional pam_gnome_keyring.so at the end of the auth section and session optional pam_gnome_keyring.so auto_start at the end of the session section.
/etc/pam.d/login
#%PAM-1.0
auth required pam_securetty.so
auth requisite pam_nologin.so
auth include system-local-login
auth optional pam_gnome_keyring.so
account include system-local-login
session include system-local-login
session optional pam_gnome_keyring.so auto_start
To use automatic unlocking, the same password has to be set for the user account and the keyring. You will still need the code in ~/.xinitrc below in order to export the required environment variables.
xinitrc method
Start the gnome-keyring-daemon from xinitrc:
~/.xinitrc
eval $(/usr/bin/gnome-keyring-daemon --start --components=pkcs11,secrets,ssh)
export SSH_AUTH_SOCK
I added my install log from the time. I don't remember a lot, but I at least wrote down some things... I would focus mostly on the end of the saga. The formula for me was: enable VMD in the BIOS, add vmd to the mkinitcpio MODULES, and actually rebuild the initramfs.
I hope this helps!