
@Birch-san
Last active February 20, 2025 00:12
ZFS home encryption Ubuntu 22.10

I started with a basic Ubuntu 22.10 installation, choosing ZFS as the volume manager in the installer.
I wanted to encrypt my home folder.

I followed the article (and comments, including Christoph Hagemann's) from:
https://talldanestale.dk/2020/04/06/zfs-and-homedir-encryption/

To achieve the following (a sketch of the dataset setup this assumes is shown after the list):

  • Home directory (a ZFS dataset under rpool) is encrypted
  • You are only prompted for a password if you are trying to log in as that user
    • So the PC can boot to the login screen without intervention
  • The password prompt authenticates you as the user and decrypts the home folder's dataset
  • SSH users get the same experience as physical users
    • You can power on the PC, then SSH in
  • Once the dataset is unlocked: subsequent SSH logins can use key exchange instead of a password
  • Once all sessions log out: the dataset's key is unloaded and it is unmounted again
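
For context, this is roughly the kind of dataset the setup operates on. A minimal sketch, not taken from the article; the dataset name rpool/USERDATA/alice and the user alice are placeholders:

sudo zfs create \
  -o encryption=on \
  -o keyformat=passphrase \
  -o keylocation=prompt \
  -o canmount=noauto \
  -o mountpoint=/home/alice \
  rpool/USERDATA/alice
# canmount=noauto keeps the dataset from being mounted automatically at boot,
# so the PAM hook can load the key and mount it at login instead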

Birch-san commented Dec 12, 2023

So your zpool disappeared, e.g. after a reboot? This happened to me with my nvme1 pool.

doesn't show up in zfs list or zpool list:

zfs list
zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
bpool  1.88G   262M  1.62G        -         -     0%    13%  1.00x    ONLINE  -
rpool  3.62T  2.03T  1.59T        -         -     3%    56%  1.00x    ONLINE  -
sdb    7.27T   380G  6.89T        -         -     0%     5%  1.00x    ONLINE  -
# where is nvme1

but still shows up in fdisk?

sudo fdisk -l
Disk /dev/nvme1n1: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: CT4000P3PSSD8                           
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: D6B03CE8-EDFA-43A6-B9F5-904B0497D7D9

Device           Start        End    Sectors  Size Type
/dev/nvme1n1p1    2048    1050623    1048576  512M EFI System
/dev/nvme1n1p2 1050624    5244927    4194304    2G Linux swap
/dev/nvme1n1p3 5244928    9439231    4194304    2G Solaris boot
/dev/nvme1n1p4 9439232 7814035455 7804596224  3.6T Solaris root
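
As an extra check (not part of the original transcript), lsblk can show which partitions still carry a ZFS signature even when the pool is not imported:

lsblk -f
# partitions belonging to a ZFS pool are listed with FSTYPE zfs_member,
# whether or not that pool is currently imported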

then try:

cd /dev/disk/by-id

looks like it's still in /dev/disk/by-id:

birch@tree-diagram:/dev/disk/by-id $ ls | grep nvme
nvme-CT4000P3PSSD8_2242E67C5015
nvme-CT4000P3PSSD8_2242E67C5015_1
nvme-CT4000P3PSSD8_2242E67C5015_1-part1
nvme-CT4000P3PSSD8_2242E67C5015_1-part2
nvme-CT4000P3PSSD8_2242E67C5015_1-part3
nvme-CT4000P3PSSD8_2242E67C5015_1-part4
nvme-CT4000P3PSSD8_2242E67C5015-part1
nvme-CT4000P3PSSD8_2242E67C5015-part2
nvme-CT4000P3PSSD8_2242E67C5015-part3
nvme-CT4000P3PSSD8_2242E67C5015-part4
nvme-CT4000P3PSSD8_2325E6E60AC2
nvme-CT4000P3PSSD8_2325E6E60AC2_1
nvme-CT4000P3PSSD8_2325E6E60AC2_1-part1
nvme-CT4000P3PSSD8_2325E6E60AC2_1-part9
nvme-CT4000P3PSSD8_2325E6E60AC2-part1
nvme-CT4000P3PSSD8_2325E6E60AC2-part9
nvme-nvme.c0a9-323234324536374335303135-43543430303050335053534438-00000001
nvme-nvme.c0a9-323234324536374335303135-43543430303050335053534438-00000001-part1
nvme-nvme.c0a9-323234324536374335303135-43543430303050335053534438-00000001-part2
nvme-nvme.c0a9-323234324536374335303135-43543430303050335053534438-00000001-part3
nvme-nvme.c0a9-323234324536374335303135-43543430303050335053534438-00000001-part4
nvme-nvme.c0a9-323332354536453630414332-43543430303050335053534438-00000001
nvme-nvme.c0a9-323332354536453630414332-43543430303050335053534438-00000001-part1
nvme-nvme.c0a9-323332354536453630414332-43543430303050335053534438-00000001-part9

Running zpool import with no pool specified lists the pools that are available to import:

birch@tree-diagram:/dev/disk/by-id $ sudo zpool import
   pool: nvme1
     id: 5687133204337010723
  state: ONLINE
status: Some supported features are not enabled on the pool.
    (Note that they may be intentionally disabled if the
    'compatibility' property is set.)
 action: The pool can be imported using its name or numeric identifier, though
    some features will not be available without an explicit 'zpool upgrade'.
 config:

    nvme1                              ONLINE
      nvme-CT4000P3PSSD8_2325E6E60AC2  ONLINE

okay, let's import it:

sudo zpool import -a

that worked:

zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
bpool  1.88G   262M  1.62G        -         -     0%    13%  1.00x    ONLINE  -
nvme1  3.62T  3.23T   405G        -         -     7%    89%  1.00x    ONLINE  -
rpool  3.62T  2.03T  1.59T        -         -     3%    56%  1.00x    ONLINE  -
sdb    7.27T   380G  6.89T        -         -     0%     5%  1.00x    ONLINE  -

start a scrub (which verifies the data and repairs anything it can), for good measure:

sudo zpool scrub nvme1

check zpool status:

zpool status nvme1
  pool: nvme1
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
	The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
	the pool may no longer be accessible by software that does not support
	the features. See zpool-features(7) for details.
  scan: scrub in progress since Tue Dec 12 20:05:13 2023
	1.36T / 3.23T scanned at 34.7G/s, 0B / 3.23T issued
	0B repaired, 0.00% done, no estimated completion time
config:

	NAME                               STATE     READ WRITE CKSUM
	nvme1                              ONLINE       0     0     0
	  nvme-CT4000P3PSSD8_2325E6E60AC2  ONLINE       0     0     0

errors: No known data errors

okay, decrypt & mount it the usual way:

# do we need to chown the mount again?
# sudo chown `whoami`:`whoami` /nvme1
sudo zfs load-key nvme1
sudo zfs mount nvme1
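
A hedged follow-up, not from the original comment: if the pool keeps disappearing at boot, it is usually because it is not recorded in the cachefile that zfs-import-cache.service reads at startup. Re-registering it after importing should make the import stick:

sudo zpool set cachefile=/etc/zfs/zpool.cache nvme1
# check that the cachefile-based import service is enabled
systemctl status zfs-import-cache.service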


adamarbour commented Jun 14, 2024

FYI - I modified this a bit to use only PAM, with no need for a service. I made the following modifications. This unlocks the dataset on login and locks it again when I log out.

file: /sbin/mount-zfs-homedir

#!/bin/bash

set -eu

# pam_exec provides these environment variables; the password arrives on stdin
# when the module is invoked with expose_authtok.
USER=$PAM_USER
PASS=$(cat -)
TYPE=$PAM_TYPE
# Rough count of the user's remaining processes, used as a proxy for open sessions.
SESSION_COUNT=$(ps -u "$USER" -o user= | wc -l)

# Walk every dataset whose canmount property is set locally; only the ones
# set to 'noauto' (i.e. not mounted automatically at boot) are of interest.
zfs get -s local -H -o name,value canmount | while read -r volname canmount; do
    [[ $canmount = 'noauto' ]] || continue

    # Only operate on datasets tagged with the logging-in user.
    user=$(zfs get -s local -H -o value void.automount.homedir:user "$volname") # NOTE: I created my own property
    [[ $user = $USER ]] || continue

    if [ "$TYPE" = "auth" ]; then
        # Skip if the dataset is already mounted.
        MOUNTPOINT="$(zfs get -o value -H mountpoint "$volname")"
        findmnt "$MOUNTPOINT" && continue

        # Feed the PAM password to zfs as the encryption passphrase, then mount.
        zfs load-key "$volname" <<< "$PASS" || continue
        zfs mount "$volname" || true
    fi

    # On logout, once the user has no remaining processes, unmount and drop the key.
    if [ "$TYPE" = "close_session" ] && [ "$SESSION_COUNT" -eq 0 ]; then
        zfs unmount "$volname" || continue
        zfs unload-key "$volname" || true
    fi
done
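
Since the script keys off a custom user property, each home dataset has to be tagged with its owner before anything will match. A sketch, with alice and rpool/USERDATA/alice as placeholder names:

# tag the dataset with its owning user (property name taken from the script above)
sudo zfs set void.automount.homedir:user=alice rpool/USERDATA/alice
# pam_exec also needs the script to be executable
sudo chmod +x /sbin/mount-zfs-homedir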

file: /etc/pam.d/system-login
NOTE: This might be different on other systems (on Ubuntu the equivalent entries go in /etc/pam.d/common-auth and /etc/pam.d/common-session)

....
auth         optional    pam_exec.so    expose_authtok    /sbin/mount-zfs-homedir
...
...
...
session    optional    pam_exec.so    /sbin/mount-zfs-homedir
...
