Install FreeBSD 14.1 on a Hetzner server

Hetzner no longer offers direct install of FreeBSD, but we can do it ourselves. Here is how :)

Boot the server into rescue mode

Boot the Hetzner server into the Hetzner Debian-based rescue mode, then ssh into it.

The Hetzner rescue image will tell you hardware details about the server in the login banner. For example, with one of my servers I see:

Hardware data:

   CPU1: AMD Ryzen 9 3900 12-Core Processor (Cores 24)
   Memory:  64243 MB
   Disk /dev/nvme0n1: 1024 GB (=> 953 GiB) doesn't contain a valid partition table
   Disk /dev/sda: 10000 GB (=> 9314 GiB) doesn't contain a valid partition table
   Disk /dev/sdb: 10000 GB (=> 9314 GiB) doesn't contain a valid partition table
   Disk /dev/sdc: 12 TB (=> 10 TiB) doesn't contain a valid partition table
   Total capacity 30 TiB with 4 Disks

Network data:
   eth0  LINK: yes
         MAC:  xx:xx:xx:xx:xx:xx
         IP:   xxx.xxx.xxx.xxx
         IPv6: xxxx:xxx:xxx:xxxx::x/64
         Intel(R) Gigabit Ethernet Network Driver

(MAC, IPv4, and IPv6 addresses redacted by me in the example output above. You'll see actual values.)

In the case of this particular system, I have three HDDs and one NVMe SSD.

And check SATA link speeds:

dmesg | grep -i sata | grep 'link up'

Output:

[Tue Oct 15 23:58:42 2024] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[Tue Oct 15 23:58:42 2024] ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[Tue Oct 15 23:58:43 2024] ata3: SATA link up 6.0 Gbps (SStatus 133 SControl 300)

And physical block sizes of HDDs:

dmesg | grep 'physical blocks'

Output:

[Tue Oct 15 23:58:45 2024] sd 2:0:0:0: [sdc] 4096-byte physical blocks
[Tue Oct 15 23:58:45 2024] sd 0:0:0:0: [sda] 4096-byte physical blocks
[Tue Oct 15 23:58:45 2024] sd 1:0:0:0: [sdb] 4096-byte physical blocks

Cool, the three HDDs have 4K physical block size. Yours might be smaller, or not reported in dmesg output. Have a look online to see if you can find out what the physical block size is for your HDDs based on their model.
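
If dmesg doesn't report it, smartctl usually does. A minimal sketch, assuming the smartmontools package is available in the rescue system (apt install smartmontools if it isn't):

smartctl -i /dev/sda | grep -i 'sector size'
# Typical output:
# Sector Sizes:     512 bytes logical, 4096 bytes physical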

You can see the model of your HDDs and/or SSDs like this:

lsblk -o NAME,FSTYPE,LABEL,MOUNTPOINT,SIZE,MODEL

Output:

NAME    FSTYPE LABEL MOUNTPOINT   SIZE MODEL
loop0   ext2                      3.2G
sda                               9.1T ST10000NM0156-2AA111
sdb                               9.1T ST10000NM0156-2AA111
sdc                              10.9T ST12000NM003G-2MT113
nvme0n1                         953.9G SAMSUNG MZVL21T0HCLR-00B00

Open a screen session, so that if we lose the connection to the server during setup, we can quickly re-attach to the screen session and pick up where we left off.

screen
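
If the connection drops, ssh back into the rescue system and re-attach to the existing session:

screen -r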

Caution

The disadvantage of running in screen or tmux is that it will mess up the text shown in the FreeBSD installer a bit.

Tip

In a future update of this guide, I will check if there are any steps we can take to keep bsdinstall from messing up the text when running in screen or tmux.

Retrieve mfsBSD and run it in QEMU with raw drives attached

basically have a mini VPS with mfsbsd running with real disk passthrough and console access, just like a KVM, so I can install as usual - and then I can even test my installation directly by booting from it in the same way! Then when it works I just boot the server normal (ie directly into FreeBSD) and if I ever b0rk something up I boot the Linux rescue image and run mfsbsd again!

Source: https://www.reddit.com/r/freebsd/comments/wf7h34/hetzner_has_silently_dropped_support_for_freebsd/ijcxgvb/

Retrieve mfsBSD.

wget https://mfsbsd.vx.sk/files/iso/14/amd64/mfsbsd-14.1-RELEASE-amd64.iso
sha256sum mfsbsd-14.1-RELEASE-amd64.iso

SHA-256 hashsum:

c3bf0eb314bfcc372eccc30917a32d156416f6ad23b63ff37fe4034d533fc09a  mfsbsd-14.1-RELEASE-amd64.iso

Start mfsBSD in QEMU with the raw drives from the machine attached:

qemu-system-x86_64 \
    -cdrom mfsbsd-14.1-RELEASE-amd64.iso \
    \
    -drive format=raw,file=/dev/nvme0n1,if=virtio \
    -drive format=raw,file=/dev/sda,if=virtio \
    -drive format=raw,file=/dev/sdb,if=virtio \
    -drive format=raw,file=/dev/sdc,if=virtio \
    \
    -display curses \
    -boot d \
    -m 8G

Note

Instead of attaching all the storage devices to QEMU, you may wish to attach only those that you want to use in the ZFS pool where the system will be installed. Throughout this guide, I will attach all drives every time anyway, partly because it makes updating the guide easier as I sometimes use different servers with different types and number of drives when editing this guide.
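
Tip

If the rescue system exposes /dev/kvm (check with ls /dev/kvm), you can add -enable-kvm to the qemu-system-x86_64 commands in this guide to use hardware-accelerated virtualization, which is considerably faster than plain emulation. The commands work without it; this is purely an optional speed-up.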

Start install

Log in from the console

  • login: root
  • password: mfsroot

Proceed to either of the following:

  • Perform a standard install of FreeBSD as described in 01_standard_install.md below, or
  • make a custom install of FreeBSD as described in 02_custom_install.md below

Standard install of FreeBSD

Start the FreeBSD installer

bsdinstall

Proceed with installation. When done, "power off" the qemu VM

poweroff

Check that it works

Now boot from the physical drives in QEMU, without the CD ISO attached.

qemu-system-x86_64 \
    -drive format=raw,file=/dev/nvme0n1,if=virtio \
    -drive format=raw,file=/dev/sda,if=virtio \
    -drive format=raw,file=/dev/sdb,if=virtio \
    -drive format=raw,file=/dev/sdc,if=virtio \
    \
    -nic user,hostfwd=tcp::2222-:22 \
    -display curses \
    -boot d \
    -m 8G

Before you reboot the host machine

QEMU provides an emulated NIC to the VM. If the physical NIC in the host needs a different driver, the interface name in the VM will differ from the one you get when running FreeBSD on the hardware.

The QEMU NIC will appear as em0.

However, in my case the physical NIC in the machine uses a different driver and appears as igb0 when running FreeBSD on the hardware.

The Hetzner Debian-based rescue system gives you a minimal description of the NIC in the machine when you ssh into it. Make note of that. If it's Intel, you can put entries for both igb0 and em0 in your /etc/rc.conf; when you boot and ssh into the machine you will see which one was used, and can then update /etc/rc.conf accordingly.

If the NIC has a Realtek chipset, it'll probably be re0 that you should put an entry for in your /etc/rc.conf.

If the NIC is neither Intel nor Realtek, you have to find out which Linux commands to use in the Hetzner Debian-based rescue system to show more details about your NIC, figure out which FreeBSD NIC driver is correct for it, and edit your /etc/rc.conf accordingly.
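
For example, still in the Hetzner rescue system, ethtool will tell you which Linux driver is bound to the NIC, which is a decent hint for picking the FreeBSD driver (a sketch; eth0 is the interface name shown in the rescue login banner):

ethtool -i eth0 | grep '^driver'
# driver: igb    -> igb0 on FreeBSD
# driver: e1000e -> em0 on FreeBSD
# driver: r8169  -> re0 on FreeBSD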

For reference, here is what the complete /etc/rc.conf from one of my Hetzner servers looks like currently:

clear_tmp_enable="YES"
syslogd_flags="-ss"
hostname="de5"

# Used when booting in Qemu
ifconfig_em0="DHCP"
ifconfig_em0_ipv6="inet6 accept_rtadv"

# Used when booting on hardware
ifconfig_igb0_name="extif"
ifconfig_extif="DHCP"

local_unbound_enable="YES"

sshd_enable="YES"

ntpd_enable="YES"
ntpd_sync_on_start="YES"

moused_nondefault_enable="NO"

# Set dumpdev to "AUTO" to enable crash dumps, "NO" to disable
dumpdev="AUTO"

zfs_enable="YES"

wireguard_enable="YES"
wireguard_interfaces="wg0"

jail_enable="YES"

Moment of truth

Reboot the host machine. If all goes well, you'll be able to ssh into it and find a running FreeBSD system :D

That's it, you're done!

Custom install of FreeBSD

For many (most?) purposes, the standard install described above is sufficient. It's straightforward, and easy to fix when something breaks.

The standard install described above, however, does not encrypt most of the system. And while you can add individual encrypted datasets to your ZFS pool even with a standard install, you will not be able to turn on encryption for any of the ZFS datasets that the installer created. Wouldn't it be nice if we could reduce the amount of data that is kept unencrypted at rest at least a bit? One of the motivations of the custom install described here is to do exactly that.

Defining our goals

For my server there are some specific things I am interested in achieving:

  • Keep as much of the system as possible encrypted at rest. With the data encrypted at rest and the keys to decrypt it kept separate, we can recycle the hard drives in the future without needing to overwrite them first. This is desirable for multiple reasons:
    • Big drives take a long time to fully overwrite, especially when you do one pass of zeros followed by one or more passes of random data to completely cover the drives.
    • Hardware failures can leave us unable to fully, or even partially, overwrite the data, meaning that safe disposal would hinge on physically destroying the drives sufficiently.
  • The base system should be possible to throw away and set up again quickly and easily.
    • Corollary: None of the system directory trees should be included in backups. Not even /usr/home as a whole. We'll get back to this.
  • Anything that is important should live in jails, with their own ZFS datasets.
    • This way, we can back up as well as restore or rollback to past versions of those "things" mostly independently of the host system itself.

Initial install

We will start off with a standard install.

This will form the basis for our "outer" base system. We will use this one to boot the server into a state where we can ssh into it to unlock our remaining datasets, from which we can then reboot into our "inner" base system.

It'll work similarly to how it's done in https://github.com/emtiu/freebsd-outerbase

Deciding on the configuration of your ZFS pool(s)

On the server I am currently setting up while updating this guide, we have 4 drives total. One NVMe SSD and three HDDs.

I go a bit back and forth from time to time, sometimes using separate pools for system and data of interest, and sometimes setting up servers with one big pool for everything.

This time around, I will set up the server with one pool for everything, spanning the three HDDs, and I will use the NVMe SSD as SLOG device for that pool.

There are tradeoffs in both directions when choosing between one pool for everything and separate pools for the system and the data of interest.

Disadvantages of having a separate pool for the system include:

  • If the system pool consists of a single drive, we lose out on some of the ZFS healing properties for the system install itself.
  • If the total number of drives is low, we lose out on drives for our data pool that could otherwise provide additional redundancy or capacity for our data.

The main advantage of having a separate pool for the system, as I see it, is this:

  • As long as you remember which drive or set of drives the system was installed to, you can completely reinstall the system, overwriting everything previously on that drive or those drives, while the data you want to keep stays safe in its separate pool on its separate drives.

Note

When I say "remember which", I really mean "write it down somewhere obvious, where you can find it".

For that reason, I used separate pools on the most recent system I set up prior to this. But this time I am setting it up all on one pool because I want to try having a pool with synchronous writes and an SLOG device, and this system only has one SSD and three HDDs.

Which configuration to use, in terms of number of pools and in terms of the setup of the ZFS pool(s) themselves will depend on the number of drives you have and what your routines for managing backups and restores will be like.

A word on backups

Regardless of whether you choose to keep separate pools for system and data, or everything on one pool, there is one thing that is more important than all else:

Important

Always backup your data! This means:

  • Having backups in other physical locations. For example:
    • One encrypted copy of your backups on a separate server, in a different data center, and
    • One encrypted copy of your backups at home (if the data is yours) or office (if the data belongs to a business with an office), and
    • One encrypted copy of your backups in the cloud.
  • Regularly verifying that backups are kept up to date, and that the backups are complete and correct.
  • Regularly verifying that you can actually restore from the backups.
  • Occasionally verifying that you can set up a new server with the services that you need in order to replace the current server, so that whatever serving or processing you are doing with your data on your current server can continue there. Ideally with as little interruption to service as possible.

If you can't afford to keep as many as three separate backup locations now, start with just one of them. One is much better than none, even though more is better.

Configuring backups is beyond the scope of this guide. I will probably write a separate guide on that topic in the future. When that happens I will add a link to that guide from here.

Check which disks are which

In the QEMU VM, all our disks appear as virtio disks (vtbd*), because they are attached through the virtio driver, even though they are passed through as raw devices.

In situations where you have disks with different physical properties that you care about when installing FreeBSD (i.e. unless all of them have the same capacities, link speeds, etc.), you want to be sure of which is which.

Note

As mentioned earlier on in the guide, an alternative to attaching all the physical disks to the QEMU VM is to attach only those disks which you intend to use during the install of FreeBSD. If you did that and all disks in the QEMU VM are those that you intend to use, you don't have to check which is which at this stage. (Although double-checking at this point can still be a good idea.)

For example, for my install I want to create the zpool so that it is a raidz vdev consisting of the three HDDs. I will then add the SLOG device at a later stage, after the initial install is done.

geom disk list

Output:

Geom name: vtbd0
Providers:
1. Name: vtbd0
   Mediasize: 1024209543168 (954G)
   Sectorsize: 512
   Mode: r0w0e0
   descr: (null)
   ident: (null)
   rotationrate: unknown
   fwsectors: 63
   fwheads: 16

Geom name: vtbd1
Providers:
1. Name: vtbd1
   Mediasize: 10000831348736 (9.1T)
   Sectorsize: 512
   Mode: r0w0e0
   descr: (null)
   ident: (null)
   rotationrate: unknown
   fwsectors: 63
   fwheads: 16

Geom name: vtbd2
Providers:
1. Name: vtbd2
   Mediasize: 10000831348736 (9.1T)
   Sectorsize: 512
   Mode: r0w0e0
   descr: (null)
   ident: (null)
   rotationrate: unknown
   fwsectors: 63
   fwheads: 16

Geom name: vtbd3
Providers:
1. Name: vtbd3
   Mediasize: 12000138625024 (11T)
   Sectorsize: 512
   Mode: r0w0e0
   descr: (null)
   ident: (null)
   rotationrate: unknown
   fwsectors: 63
   fwheads: 16

So in my case I will want to select the devices vtbd1, vtbd2, and vtbd3 during install.

Performing the install

Run

bsdinstall
  • For the hostname I choose stage4, because the normal boot itself has 3 stages and this will, in a sense, be our fourth stage of booting.
  • At the partitioning step we do guided root on ZFS, and in my case I select:
    • Pool Type/Disks to consist of a raidz vdev with three drives (the three HDDs)
    • Force 4K Sectors? to YES
      • In my case this was already pre-selected, and this is what I want because of the physical sector sizes of my HDDs.
    • Encrypt Disks to NO
      • Remember, this is the "outer" base system. The "outer" base system is unencrypted, but will hold none of our service configurations or any of our data short of a default install running an SSH server.
    • Partition Scheme to GPT (UEFI)
    • Swap Size to 0
      • Some people insist that having no swap is a terrible idea. I prefer having no swap.
  • At the user creation step, after you've created a password for root, create a user that has "boot" as part of its name, to distinguish it from the kinds of users you normally make on your servers. For example, I usually make my user named "erikn" but here I name it erikboot. When asked if you want to add the user to any additional groups, make sure to add the user to the wheel group.
  • Keep ssh selected as a service to run.
  • For all other steps make whatever choices you'd normally make according to your preference.

Finish initial steps

Export the zpool and then power off the VM.

zpool export zroot
poweroff

Check that it works so far

Now it's time to boot the VM again, but without the mfsBSD media.

In order to boot EFI in QEMU we need some extra files from https://www.kraxel.org/repos/jenkins/edk2/ as mentioned at https://wiki.freebsd.org/UEFI and also https://joonas.fi/2021/02/uefi-pc-boot-process-and-uefi-with-qemu/

wget https://www.kraxel.org/repos/jenkins/edk2/edk2.git-ovmf-x64-0-20220719.209.gf0064ac3af.EOL.no.nore.updates.noarch.rpm
sha256sum edk2.git-ovmf-x64-0-20220719.209.gf0064ac3af.EOL.no.nore.updates.noarch.rpm
bc42937c5c50b552dd7cd05ed535ed2b8aed30b04060032b7648ffeee2defb8e  edk2.git-ovmf-x64-0-20220719.209.gf0064ac3af.EOL.no.nore.updates.noarch.rpm

Extract.

apt install -y rpm2cpio
rpm2cpio edk2.git-ovmf-x64-0-20220719.209.gf0064ac3af.EOL.no.nore.updates.noarch.rpm | cpio -idmv
./usr/share/doc/edk2.git-ovmf-x64
./usr/share/doc/edk2.git-ovmf-x64/README
./usr/share/edk2.git
./usr/share/edk2.git/ovmf-x64
./usr/share/edk2.git/ovmf-x64/MICROVM.fd
./usr/share/edk2.git/ovmf-x64/OVMF-need-smm.fd
./usr/share/edk2.git/ovmf-x64/OVMF-pure-efi.fd
./usr/share/edk2.git/ovmf-x64/OVMF-with-csm.fd
./usr/share/edk2.git/ovmf-x64/OVMF_CODE-need-smm.fd
./usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd
./usr/share/edk2.git/ovmf-x64/OVMF_CODE-with-csm.fd
./usr/share/edk2.git/ovmf-x64/OVMF_VARS-need-smm.fd
./usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd
./usr/share/edk2.git/ovmf-x64/OVMF_VARS-with-csm.fd
./usr/share/edk2.git/ovmf-x64/UefiShell.iso
./usr/share/qemu/firmware/80-ovmf-x64-git-need-smm.json
./usr/share/qemu/firmware/81-ovmf-x64-git-pure-efi.json
./usr/share/qemu/firmware/82-ovmf-x64-git-with-csm.json
37888 blocks

Boot

qemu-system-x86_64 \
    \
    -drive if=pflash,format=raw,unit=0,readonly=on,file=usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd \
    -drive if=pflash,format=raw,unit=1,file=usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd \
    \
    -drive format=raw,file=/dev/nvme0n1,if=virtio \
    -drive format=raw,file=/dev/sda,if=virtio \
    -drive format=raw,file=/dev/sdb,if=virtio \
    -drive format=raw,file=/dev/sdc,if=virtio \
    \
    -nic user,hostfwd=tcp::2222-:22 \
    -vnc 127.0.0.1:1,password=on -k en-us -monitor stdio \
    -boot d \
    -m 8G

From the QEMU monitor console, use the command change vnc password to set a VNC password, as described at https://wiki.archlinux.org/title/QEMU#VNC

Then forward port 5901 from the server to your machine over SSH and then connect to VNC over the forwarded port.

Run this command in a new terminal on your computer to forward the port:

ssh -L 25901:127.0.0.1:5901 yourserver.example.com

(Substitute your actual server DNS name or IP address in place of yourserver.example.com)

Then connect to VNC from your machine using the forwarded port 127.0.0.1:25901.
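
For example, with TigerVNC installed on your computer (any VNC client will do):

vncviewer 127.0.0.1::25901

(The double colon tells vncviewer that 25901 is a TCP port rather than a display number.)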

VNC should show the FreeBSD console login prompt. Log in as root with the password you set during install.

Check the pool info and the datasets that have been created so far.

zpool status

Output:

  pool: zroot
 state: ONLINE
config:

	NAME         STATE     READ WRITE CKSUM
	zroot        ONLINE       0     0     0
	  raidz1-0   ONLINE       0     0     0
	    vtbd1p2  ONLINE       0     0     0
	    vtbd2p2  ONLINE       0     0     0
	    vtbd3p2  ONLINE       0     0     0

errors: No known data errors
zfs list

Output:

NAME                  USED  AVAIL  REFER  MOUNTPOINT
zroot                 852M  18.0T   128K  /zroot
zroot/ROOT            849M  18.0T   128K  none
zroot/ROOT/default    849M  18.0T   849M  /
zroot/home            309K  18.0T   128K  /home
zroot/home/erikboot   181K  18.0T   181K  /home/erikboot
zroot/tmp             128K  18.0T   128K  /tmp
zroot/usr             384K  18.0T   128K  /usr
zroot/usr/ports       128K  18.0T   128K  /usr/ports
zroot/usr/src         128K  18.0T   128K  /usr/src
zroot/var             842K  18.0T   128K  /var
zroot/var/audit       128K  18.0T   128K  /var/audit
zroot/var/crash       128K  18.0T   128K  /var/crash
zroot/var/log         202K  18.0T   202K  /var/log
zroot/var/mail        128K  18.0T   128K  /var/mail
zroot/var/tmp         128K  18.0T   128K  /var/tmp
zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zroot  27.3T  1.25G  27.3T        -         -     0%     0%  1.00x    ONLINE  -

Export the zpool and shut down the VM. Then boot it with the mfsBSD media again.

zpool export zroot
poweroff

Note

Depending on what services you chose to run when you installed FreeBSD, it might not be possible to export the zpool at this point. For example, it might say that /var/log is busy. In that case, don't worry – power off the machine with the poweroff command even if you were not able to export the zpool.

Boot with mfsBSD again

qemu-system-x86_64 \
    -cdrom mfsbsd-14.1-RELEASE-amd64.iso \
    \
    -drive format=raw,file=/dev/nvme0n1,if=virtio \
    -drive format=raw,file=/dev/sda,if=virtio \
    -drive format=raw,file=/dev/sdb,if=virtio \
    -drive format=raw,file=/dev/sdc,if=virtio \
    \
    -display curses \
    -boot d \
    -m 8G

Once the console reaches the login screen, log in with the same mfsBSD credentials as before:

  • login: root
  • password: mfsroot

Initial ZFS datasets

Import pool

zpool import -o altroot=/mnt -f zroot

Get rid of the datasets that we don't want

zfs destroy zroot/var/tmp
zfs destroy zroot/var/mail
zfs destroy zroot/var/log
zfs destroy zroot/var/crash
zfs destroy zroot/var/audit
zfs destroy zroot/var
zfs destroy zroot/usr/src
zfs destroy zroot/usr/ports
zfs destroy zroot/usr
zfs destroy zroot/tmp
zfs destroy -r zroot/home

Now we are left with only the datasets we want to have

zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
zroot                809M   914G    96K  /mnt/zroot
zroot/ROOT           807M   914G    96K  none
zroot/ROOT/default   807M   914G   807M  /mnt

Unmount the zroot dataset.

mount
/dev/md0 on / (ufs, local, read-only)
devfs on /dev (devfs)
tmpfs on /rw (tmpfs, local)
devfs on /rw/dev (devfs)
zroot on /rw/mnt/zroot (zfs, local, noatime, nfsv4acls)
zfs umount zroot

Of course, a bunch of files that we want are now missing, since bsdinstall had placed them on the datasets we just destroyed.

Let's fix that.

Restore the files we want to keep

zfs mount zroot/ROOT/default
cd /mnt/tmp/
fetch https://download.freebsd.org/releases/amd64/14.1-RELEASE/base.txz
sha256sum base.txz
bb451694e8435e646b5ff7ddc5e94d5c6c9649f125837a34b2a2dd419732f347  base.txz
cd /mnt/
tar xv --keep-old-files -f /mnt/tmp/base.txz
cd /
chroot /mnt/
getent passwd
[...]
erikboot:[...]:1001:1001:Boot user:/home/erikboot:/bin/sh
mkdir /home/erikboot
chown erikboot:erikboot /home/erikboot
chmod 751 /home/erikboot
chmod 1777 /tmp

Prepare system

Create ssh authorized keys file for non-root user

su - erikboot
mkdir .ssh
chmod 700 .ssh
cat > .ssh/authorized_keys <<EOF
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAUKA+TEFQzOqj6Ahb0kCg4p78QTzoN+mZHlE4BTQ+tY erikn@milkyway
EOF

Caution

Use your own public key when creating authorized keys file for your own non-root user. Not mine. You are the one that needs to be able to log in to your server, not me!

Exit the shell we spawned with su

exit

Disallow password login over ssh for the outer system (our stage4 install that we are currently chrooted into) by setting KbdInteractiveAuthentication to no in /etc/ssh/sshd_config.

KbdInteractiveAuthentication no
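
If you prefer to make that change non-interactively instead of opening an editor, something like this should work inside the chroot (a sketch, assuming the stock sshd_config where the option is present but commented out):

sed -i '' 's/^#\{0,1\}KbdInteractiveAuthentication.*/KbdInteractiveAuthentication no/' /etc/ssh/sshd_config
grep KbdInteractiveAuthentication /etc/ssh/sshd_config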

Getting ready for the finishing touches

Exit chroot

exit

Export pool and power off QEMU.

zpool export zroot
poweroff

Boot into VM without mfsBSD

qemu-system-x86_64 \
    \
    -drive if=pflash,format=raw,unit=0,readonly=on,file=usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd \
    -drive if=pflash,format=raw,unit=1,file=usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd \
    \
    -drive format=raw,file=/dev/nvme0n1,if=virtio \
    -drive format=raw,file=/dev/sda,if=virtio \
    -drive format=raw,file=/dev/sdb,if=virtio \
    -drive format=raw,file=/dev/sdc,if=virtio \
    \
    -nic user,hostfwd=tcp::2222-:22 \
    -vnc 127.0.0.1:1,password=on -k en-us -monitor stdio \
    -boot d \
    -m 8G

From the QEMU monitor console, use the command change vnc password to set the VNC password as before.

Then, log in as root over forwarded VNC as before, using the root password that you created during the install.

Install some packages for the outer system.

pkg install -y doas tree neovim zsh tmux

Create config file for doas.

cat > /usr/local/etc/doas.conf <<EOF
permit nopass :wheel
EOF

ssh into the QEMU VM

The outer and inner systems will have different host keys for ssh.

In order to properly keep track of the known hosts for the outer and inner system on your client (such as your laptop), you can create entries similar to the following in your ~/.ssh/config on your client:

Host de5-recovery
	AddKeysToAgent yes
	UseKeychain yes
	HostName xxx.xxx.xxx.xxx
	HostkeyAlias "de5-recovery:22"
	IdentityFile ~/.ssh/host_specific/erik/hetzner/de5-recovery/id_ed25519_root
	User root
	RequestTTY yes

Host de5-stage4-qemu
	AddKeysToAgent yes
	UseKeychain yes
	HostName xxx.xxx.xxx.xxx
	HostkeyAlias "de5-stage4-qemu:2222"
	IdentityFile ~/.ssh/host_specific/erik/hetzner/de5-stage4/id_ed25519_erikboot
	Port 2222
	User erikboot
	RequestTTY yes

Host de5-stage4
	AddKeysToAgent yes
	UseKeychain yes
	HostName xxx.xxx.xxx.xxx
	HostkeyAlias "de5-stage4:22"
	IdentityFile ~/.ssh/host_specific/erik/hetzner/de5-stage4/id_ed25519_erikboot
	User erikboot
	RequestTTY yes

Host de5-inner-qemu
	AddKeysToAgent yes
	UseKeychain yes
	HostName xxx.xxx.xxx.xxx
	HostkeyAlias "de5-inner-qemu:2222"
	IdentityFile ~/.ssh/host_specific/erik/hetzner/de5-inner/id_ed25519_erikn
	Port 2222
	RequestTTY yes

Host de5-inner
	AddKeysToAgent yes
	UseKeychain yes
	HostName xxx.xxx.xxx.xxx
	HostkeyAlias "de5-inner:22"
	IdentityFile ~/.ssh/host_specific/erik/hetzner/de5-inner/id_ed25519_erikn
	RequestTTY yes

After you have done that, ssh from your client into the outer system currently running in QEMU, as your equivalent of my erikboot non-root user, using the alias you created for it. In my case, that's de5-stage4-qemu.

ssh de5-stage4-qemu

Reservation

Create a dataset that will reserve 20% of the capacity of the pool, as per the recommendation from Michael W. Lucas in the book FreeBSD Mastery: Advanced ZFS.

doas zfs create -o refreservation=360G -o canmount=off -o readonly=on -o mountpoint=none zroot/reservation
zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
zroot                361G  17.7T   128K  /zroot
zroot/ROOT          1.16G  17.7T   128K  none
zroot/ROOT/default  1.16G  17.7T  1.16G  /
zroot/reservation    360G  18.0T   128K  none
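
If you ever need that reserved space back temporarily (for example because the pool has run full), you can drop the reservation and recreate it after cleaning up:

doas zfs set refreservation=none zroot/reservation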

Prepare and mount encrypted dataset for "inner"

doas zfs create -o mountpoint=none -o encryption=on -o keyformat=passphrase zroot/IROOT
Enter new passphrase:
Re-enter new passphrase:

Create a passphrase and remember it. You will use it to decrypt the inner system every time you ssh into the outer system after a reboot.

doas zfs create -o mountpoint=none zroot/IROOT/default
zfs list -o name,used,avail,refer,mountpoint,encryption,keyformat
NAME                  USED  AVAIL  REFER  MOUNTPOINT  ENCRYPTION   KEYFORMAT
zroot                 361G  17.7T   128K  /zroot      off          none
zroot/IROOT           490K  17.7T   245K  none        aes-256-gcm  passphrase
zroot/IROOT/default   245K  17.7T   245K  none        aes-256-gcm  passphrase
zroot/ROOT           1.16G  17.7T   128K  none        off          none
zroot/ROOT/default   1.16G  17.7T  1.16G  /           off          none
zroot/reservation     360G  18.0T   128K  none        off          none
doas zfs set -u mountpoint=/mnt zroot/IROOT/default
doas zfs mount zroot/IROOT/default
mount
zroot/ROOT/default on / (zfs, local, noatime, nfsv4acls)
devfs on /dev (devfs)
/dev/gpt/efiboot0 on /boot/efi (msdosfs, local)
zroot on /zroot (zfs, local, noatime, nfsv4acls)
zroot/IROOT/default on /mnt (zfs, local, noatime, nfsv4acls)

Install "inner"

doas bsdinstall

Choose inner as the hostname.

On the partitioning step, choose "Shell" ("Open a shell and partition by hand"). We've already done the partitioning and mounted the target, so simply exit:

exit

Select a mirror as usual, and the installer will then extract the system.

After it finishes, exit the installer and have a look at the extracted files.

ls -al /mnt
total 234
drwxr-xr-x  19 root wheel   24 Oct 16 02:34 .
drwxr-xr-x  20 root wheel   25 Oct 16 02:12 ..
-rw-r--r--   2 root wheel 1011 May 31 11:00 .cshrc
-rw-r--r--   2 root wheel  495 May 31 11:00 .profile
-r--r--r--   1 root wheel 6109 May 31 11:39 COPYRIGHT
drwxr-xr-x   2 root wheel   49 May 31 11:00 bin
drwxr-xr-x  14 root wheel   70 Oct 16 02:34 boot
dr-xr-xr-x   2 root wheel    3 Oct 16 02:33 dev
-rw-------   1 root wheel 4096 Oct 16 02:34 entropy
drwxr-xr-x  30 root wheel  107 Oct 16 02:34 etc
drwxr-xr-x   3 root wheel    3 Oct 16 02:33 home
drwxr-xr-x   4 root wheel   78 May 31 11:08 lib
drwxr-xr-x   3 root wheel    5 May 31 10:58 libexec
drwxr-xr-x   2 root wheel    2 May 31 10:32 media
drwxr-xr-x   2 root wheel    2 May 31 10:32 mnt
drwxr-xr-x   2 root wheel    2 May 31 10:32 net
dr-xr-xr-x   2 root wheel    2 May 31 10:32 proc
drwxr-xr-x   2 root wheel  150 May 31 11:04 rescue
drwxr-x---   2 root wheel    7 May 31 11:39 root
drwxr-xr-x   2 root wheel  150 May 31 11:27 sbin
lrwxr-xr-x   1 root wheel   11 May 31 10:32 sys -> usr/src/sys
drwxrwxrwt   2 root wheel    2 May 31 10:32 tmp
drwxr-xr-x  15 root wheel   15 May 31 11:49 usr
drwxr-xr-x  24 root wheel   24 May 31 10:32 var

Give the inner system the same hostid as the outer one, so that zpool import will not think the pool has been used by a different system.

doas cp /etc/hostid /mnt/etc/hostid

And create authorized keys for the inner user.

mkdir /mnt/home/erikn/.ssh/
chown 1001:1001 /mnt/home/erikn/.ssh/
chmod 700 /mnt/home/erikn/.ssh/
cat > /mnt/home/erikn/.ssh/authorized_keys <<EOF
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHzzLyhwn81G0lQq/7ZD0N/cUgaMJ04V9synwyrHOtqZ erikn@milkyway
EOF

Caution

As with the outer system, use a public key of your own when creating authorized keys file for your own non-root user. Not mine.

Note that we specified different ssh public keys to log in to the "outer" and "inner" systems.
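
In case it's not obvious: those are two separate keypairs, generated on your client. A sketch matching the paths used in the example ~/.ssh/config above (adjust the paths to your own naming):

mkdir -p ~/.ssh/host_specific/erik/hetzner/de5-stage4 ~/.ssh/host_specific/erik/hetzner/de5-inner
ssh-keygen -t ed25519 -f ~/.ssh/host_specific/erik/hetzner/de5-stage4/id_ed25519_erikboot
ssh-keygen -t ed25519 -f ~/.ssh/host_specific/erik/hetzner/de5-inner/id_ed25519_erikn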

Edit rc conf files of outer and inner.

doas nvim /etc/rc.conf
clear_tmp_enable="YES"
syslogd_flags="-ss"
hostname="stage4"

# Used when booting in QEMU
ifconfig_em0="DHCP"
ifconfig_em0_ipv6="inet6 accept_rtadv"

# Used when booting on hardware
ifconfig_igb0_name="extif"
ifconfig_extif="DHCP"

local_unbound_enable="YES"

sshd_enable="YES"

ntpd_enable="YES"
ntpd_sync_on_start="YES"

moused_nondefault_enable="NO"

# Set dumpdev to "AUTO" to enable crash dumps, "NO" to disable
dumpdev="AUTO"

zfs_enable="YES"
doas nvim /mnt/etc/rc.conf
clear_tmp_enable="YES"
syslogd_flags="-ss"
hostname="inner"

# Used when booting in QEMU
ifconfig_em0="DHCP"
ifconfig_em0_ipv6="inet6 accept_rtadv"

# Used when booting on hardware
ifconfig_igb0_name="extif"
ifconfig_extif="DHCP"

local_unbound_enable="YES"

sshd_enable="YES"

moused_nondefault_enable="NO"

# Set dumpdev to "AUTO" to enable crash dumps, "NO" to disable
dumpdev="AUTO"

zfs_enable="YES"

wireguard_enable="YES"
wireguard_interfaces="wg0"

jail_enable="YES"

Power off the VM, and then power it on and ssh into it as your "outer" local user again (whatever equivalent you have of my erikboot user).

Then, unset the mountpoint for the inner system's dataset

doas zfs set mountpoint=none zroot/IROOT/default

Decrypt it

doas zfs load-key zroot/IROOT
Enter passphrase for 'zroot/IROOT':

And attempt to reboot into it. (reboot -r "re-roots" the running kernel onto the filesystem named in vfs.root.mountfrom without going back through the loader, which is why the ZFS key we just loaded stays available.)

doas kenv vfs.root.mountfrom="zfs:zroot/IROOT/default"
doas reboot -r

If you're watching on VNC you'll see that it says

Trying to mount root from zfs:zroot/IROOT/default []...

and after a little bit of time you should see that it gives the login prompt with the hostname of the inner system

FreeBSD/amd64 (inner) (ttyv0)

login:

And then ssh using the relevant alias. In this case, for me, it's my de5-inner-qemu.

ssh de5-inner-qemu

Switch to the root user, using the root password that you set during the install of the inner system.

su -

Install some packages in the inner system.

pkg install -y doas tree neovim zsh tmux

Create config file for doas in the inner system.

cat > /usr/local/etc/doas.conf <<EOF
permit nopass :wheel
EOF

Disallow password login over ssh by setting KbdInteractiveAuthentication to no in /etc/ssh/sshd_config in the inner system.

KbdInteractiveAuthentication no

Power off the QEMU VM.

Check if UEFI boot is enabled

On the host system, in the Hetzner Rescue environment, run:

efibootmgr

If the output says:

EFI variables are not supported on this system.

then you need to send a support ticket to Hetzner to ask them to turn on UEFI for you.

https://docs.hetzner.com/robot/dedicated-server/operating-systems/uefi/

In the meantime, power off the host machine.

Moment of truth

After you have UEFI enabled by Hetzner support, or if it was already enabled according to the output of efibootmgr, boot the machine, and you should be able to ssh into the outer system using your ssh host alias for it.

ssh de5-stage4

Rebooting into the inner system

Now that your machine is running FreeBSD on the metal, and you have logged in to the outer system via ssh, it's time to reboot into the inner system.

Decrypt the ZFS datasets for the inner system.

doas zfs load-key zroot/IROOT
Enter passphrase for 'zroot/IROOT':

And attempt to reboot into it

doas kenv vfs.root.mountfrom="zfs:zroot/IROOT/default"
doas reboot -r

Wait a bit for the system to reboot. Give it a minute or two. Then, ssh into the inner system using your ssh host alias for it.

ssh de5-inner

Adding SLOG device to zpool

TODO: Add this section.
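
In the meantime, here is the rough shape of it, as a sketch only (not the finished steps). It assumes the NVMe SSD shows up as nda0 when FreeBSD runs on the hardware (FreeBSD 14 names NVMe disks nda by default; inside the QEMU VM the same disk appears as vtbd0), and the 32 GB partition size is just an example:

gpart create -s gpt nda0
gpart add -t freebsd-zfs -a 1m -s 32g -l slog0 nda0
zpool add zroot log gpt/slog0
zpool status zroot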

Fixing problems

If problems arise booting into the system, for example after a system upgrade, boot the server into rescue mode again and ssh into it. Then

wget https://mfsbsd.vx.sk/files/iso/14/amd64/mfsbsd-14.1-RELEASE-amd64.iso

qemu-system-x86_64 \
    -cdrom mfsbsd-14.1-RELEASE-amd64.iso \
    \
    -drive format=raw,file=/dev/nvme0n1,if=virtio \
    -drive format=raw,file=/dev/nvme1n1,if=virtio \
    -drive format=raw,file=/dev/nvme2n1,if=virtio \
    -drive format=raw,file=/dev/nvme3n1,if=virtio \
    \
    -display curses \
    -boot d \
    -m 8G

And then once inside the VM, import the ZFS pool with altroot specified

zpool import -o altroot=/mnt -f zroot

Then take it from there.
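
If the problem is in the inner system, you will also need to load its key and mount its root dataset to get at the files. A minimal sketch (the temporary mountpoint is just an example; remember that altroot=/mnt prefixes it, so the files end up under /mnt/inner):

zfs load-key zroot/IROOT
zfs set mountpoint=/inner zroot/IROOT/default
zfs mount zroot/IROOT/default

When you are done, set the mountpoint back to none, export the pool, and power off the VM as before.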

@lispstudent

Thank you for this, I have been using it with excellent results. You might wish to update mfsBSD ISO image to 14.1: https://mfsbsd.vx.sk/

@ctsrc

ctsrc commented Oct 11, 2024

Thank you for this, I have been using it with excellent results. You might wish to update mfsBSD ISO image to 14.1: https://mfsbsd.vx.sk/

Glad to hear that @lispstudent :)

I've updated the guide for FreeBSD 14.1 now 👌🏻

@bretton

bretton commented Oct 11, 2024

Hi, awesome and well done! Have you seen https://depenguin.me ? (14.1 is ready to go live)

also https://github.com/depenguin-me/depenguin-run and related

Some people have turned it into tools such as https://github.com/netzkommune/depenguin-provision

@ctsrc

ctsrc commented Oct 11, 2024

@bretton I haven't seen that one before. Nice tools you guys have there, I'll have to try it some time :)
