Hetzner no longer offers direct install of FreeBSD, but we can do it ourselves. Here is how :)
Boot the server into rescue mode
Boot the Hetzner server into the Hetzner Debian based rescue mode, then ssh into it.
The Hetzner rescue image will tell you hardware details about the server in the login banner.
For example, with one of my servers I see:
Hardware data:
CPU1: AMD Ryzen 9 3900 12-Core Processor (Cores 24)
Memory: 64243 MB
Disk /dev/nvme0n1: 1024 GB (=> 953 GiB) doesn't contain a valid partition table
Disk /dev/sda: 10000 GB (=> 9314 GiB) doesn't contain a valid partition table
Disk /dev/sdb: 10000 GB (=> 9314 GiB) doesn't contain a valid partition table
Disk /dev/sdc: 12 TB (=> 10 TiB) doesn't contain a valid partition table
Total capacity 30 TiB with 4 Disks
Network data:
eth0 LINK: yes
MAC: xx:xx:xx:xx:xx:xx
IP: xxx.xxx.xxx.xxx
IPv6: xxxx:xxx:xxx:xxxx::x/64
Intel(R) Gigabit Ethernet Network Driver
(MAC, IPv4 and IPv6 address redacted by me in the above example output. You'll see actual values.)
In the case of this particular system, I have three HDDs and one NVMe SSD.
And check SATA link speeds:
dmesg | grep -i sata | grep 'link up'
Output:
[Tue Oct 15 23:58:42 2024] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[Tue Oct 15 23:58:42 2024] ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[Tue Oct 15 23:58:43 2024] ata3: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Cool, the three HDDs have a 4K physical block size. Yours might be smaller, or the size might not be reported in the dmesg output.
Have a look online to see if you can find out what the physical block size is for your HDDs based on their model.
You can see the model of your HDDs and/or SSDs like this:
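For example, with lsblk, which ships with the Debian rescue image (the PHY-SEC column conveniently also shows the physical sector size, so you can double-check the 4K figure from above):
lsblk -o NAME,MODEL,SIZE,PHY-SEC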
Open a screen session, so that if we lose our connection to the server while in the middle of the setup,
we can quickly re-attach to the screen session and pick up work again right away.
screen
Caution
The disadvantage of running in screen or tmux is that it will mess up
the text shown in the FreeBSD installer a bit.
Tip
In a future update of this guide, I will check if there are any steps we can take
to make bsdinstall not mess up the text when running in screen or tmux.
Retrieve mfsBSD and run it in QEMU with raw drives attached
The idea is to basically have a mini VPS with mfsBSD running, with real disk passthrough and console access, just like a KVM,
so I can install as usual - and then I can even test my installation directly by booting from it in the same way!
Then, when it works, I just boot the server normally (i.e. directly into FreeBSD), and if I ever b0rk something up,
I boot the Linux rescue image and run mfsBSD again!
Instead of attaching all the storage devices to QEMU, you may wish to attach only those
that you want to use in the ZFS pool where the system will be installed. Throughout this guide,
I will attach all drives every time anyway, partly because it makes updating the guide easier
as I sometimes use different servers with different types and numbers of drives when editing this guide.
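The exact invocation will depend on your hardware, but to make the above concrete, here is a sketch of the kind of QEMU command I mean, run from the rescue system. Treat the details as assumptions to adjust: the ISO filename (fetch the current release from https://mfsbsd.vx.sk/ first), the OVMF firmware path from the Debian ovmf package, the drive device names from your own login banner, and the forwarded ssh port 2222.
# install QEMU and the UEFI firmware in the rescue system, if needed
apt-get install -y qemu-system-x86 ovmf
# boot mfsBSD with the raw drives passed through; attaching nvme0n1 first
# makes it vtbd0 inside FreeBSD, and the three HDDs become vtbd1-vtbd3
qemu-system-x86_64 -enable-kvm -m 8192 -smp 4 \
  -bios /usr/share/ovmf/OVMF.fd \
  -drive file=/dev/nvme0n1,format=raw,if=virtio \
  -drive file=/dev/sda,format=raw,if=virtio \
  -drive file=/dev/sdb,format=raw,if=virtio \
  -drive file=/dev/sdc,format=raw,if=virtio \
  -cdrom mfsbsd.iso -boot d \
  -nic user,model=e1000,hostfwd=tcp::2222-:22 \
  -vnc 127.0.0.1:0
Point a VNC client at display :0 on the rescue system (ideally through an ssh tunnel to port 5900) to get console access. Later, when this guide says to boot the VM without the mfsBSD media, drop the -cdrom and -boot d options and run the same command again.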
Start install
Log in from the console
login: root
password: mfsroot
Proceed to either of the following:
Perform a standard install of FreeBSD as described in 01_standard_install.md below, or
make a custom install of FreeBSD as described in 02_custom_install.md below
QEMU provides an emulated NIC to the VM. So if the physical NIC in the host
needs a different driver, the interface name in the VM will be different
from what it will be when running FreeBSD on the hardware.
The QEMU NIC will appear as em0.
However, in my case the physical NIC in the machine uses a different driver and
appears as igb0 when running FreeBSD on the hardware.
The Hetzner Debian based rescue system will give you a minimal description of the NIC
in the machine when you ssh into it. Make note of that. If it's Intel, you can
put entries for both igb0 and em0 in your /etc/rc.conf;
then, when you boot and ssh into the machine, you will see which one was used
and can update your /etc/rc.conf accordingly.
If the NIC has a Realtek chipset, it'll probably be re0 that you should
put an entry for in your /etc/rc.conf.
If the NIC is neither Intel nor RealTek, you have to find out what Linux commands to use
in the Hetzner Debian based rescue system to show more details about your NIC,
and then you need to figure out which FreeBSD NIC driver is correct for that one
and edit your /etc/rc.conf accordingly.
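For example, a couple of generic commands that work in the Debian rescue image (eth0 here is just the interface name from the login banner above):
lspci -nn | grep -i ethernet
ethtool -i eth0
The Linux driver name that ethtool reports (igb, e1000e, r8169, ...) is a good hint for which FreeBSD driver, and therefore which interface name, to expect.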
For reference, here is what the complete /etc/rc.conf from one of my Hetzner
servers looks like currently:
clear_tmp_enable="YES"
syslogd_flags="-ss"
hostname="de5"
# Used when booting in Qemu
ifconfig_em0="DHCP"
ifconfig_em0_ipv6="inet6 accept_rtadv"
# Used when booting on hardware
ifconfig_igb0_name="extif"
ifconfig_extif="DHCP"
local_unbound_enable="YES"
sshd_enable="YES"
ntpd_enable="YES"
ntpd_sync_on_start="YES"
moused_nondefault_enable="NO"
# Set dumpdev to "AUTO" to enable crash dumps, "NO" to disable
dumpdev="AUTO"
zfs_enable="YES"
wireguard_enable="YES"
wireguard_interfaces="wg0"
jail_enable="YES"
Moment of truth
Reboot the host machine. If all goes well, you'll be able to ssh into it and find
a running FreeBSD system :D
For many (most?) purposes, the standard install described above is sufficient.
It's straightforward, and easy to fix if/when something breaks.
The standard install described above however does not encrypt most parts of the system.
And while you can add additional individual encrypted datasets to your ZFS pool even with
a standard install, you will not be able to turn on encryption for any of the ZFS datasets
that have been created by the installer. Wouldn't it be nice if we could reduce the amount
of data that is kept unencrypted at rest at least a bit? One of the motivations of the custom
install described here is to do exactly that.
Defining our goals
For my server there are some specific things I am interested in achieving:
Keep as much of the system as possible encrypted at rest. With data encrypted at rest, and the keys to decrypt
that data kept separate, we can recycle the hard drives in the future without needing to do overwrites
of the drives first. This is desirable for multiple reasons:
Big drives take a long time to fully overwrite. Especially so when you do one pass of writing zeros
followed by one or more passes of writing random data to completely cover the drives.
Hardware failures can leave us unable to fully, or even partially, overwrite
the data, meaning that safe disposal would hinge on being able to physically destroy the drives thoroughly enough.
The base system should be possible to throw away and set up again quickly and easily.
Corollary: None of the system directory trees should be included in backups.
Not even /usr/home as a whole. We'll get back to this.
Anything that is important should live in jails, with their own ZFS datasets.
This way, we can back up as well as restore or rollback to past versions of those "things"
mostly independently of the host system itself.
Initial install
We will start off with a standard install.
This will form the basis for our "outer" base system. We will use this one to boot the server into a state where
we can ssh into it to unlock our remaining datasets, from which we can then reboot into our "inner" base system.
On the server I am currently setting up while updating this guide, we have 4 drives total.
One NVMe SSD and three HDDs.
I go a bit back and forth from time to time, sometimes using separate pools for system and data of interest,
and sometimes setting up servers with one big pool for everything.
This time around, I will set up the server with one pool for everything, spanning the three HDDs,
and I will use the NVMe SSD as SLOG device for that pool.
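For reference, attaching the SSD as a separate log device once the pool exists is a one-liner along these lines (the partition name is only an illustration; on the metal the NVMe drive typically shows up as nda0 or nvd0, while inside the QEMU VM it is vtbd0):
doas zpool add zroot log /dev/nda0p1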
There are tradeoffs in both directions when choosing between one pool for everything and separate
pools for system and data of interest.
Disadvantages of having a separate pool for the system include:
If the system pool consists of a single drive, we lose out on some of the
ZFS healing properties for the system install itself.
If the total number of drives is low, we lose out on drives for our data
pool that could otherwise provide additional redundancy or capacity for our data.
The main advantage of having a separate pool for the system, as I see it, is this:
As long as you remember which drive or set of drives the system was installed to, you should
be able to completely reinstall the system, overwriting all data you previously
had on that drive or those drives, while the important data you want to keep stays safely
in its separate pool on its separate drives.
Note
When I say "remember which", I really mean "write it down somewhere obvious, where you can find it".
For that reason, I used separate pools on the most recent system I set up prior to this.
But this time I am setting it up all on one pool because I want to try having a pool with
synchronous writes and an SLOG device, and this system only has one SSD and three HDDs.
Which configuration to use, in terms of number of pools and in terms of the setup
of the ZFS pool(s) themselves will depend on the number of drives you have and
what your routines for managing backups and restores will be like.
A word on backups
Regardless of whether you choose to keep separate pools for system and data, or everything
on one pool, there is one thing that is more important than all else:
Important
Always backup your data! This means:
Having backups in other physical locations. For example:
One encrypted copy of your backups on a separate server, in a different data center, and
One encrypted copy of your backups at home (if the data is yours)
or office (if the data belongs to a business with an office), and
One encrypted copy of your backups in the cloud.
Regularly verifying that backups are kept up to date, and that the backups are complete and correct.
Regularly verifying that you can actually restore from the backups.
Occasionally verifying that you can set up a new server with the services
that you need in order to replace the current server, so that whatever serving or
processing you are doing with your data on your current server can continue there.
Ideally with as little interruption to service as possible.
If you can't afford to keep as many as three separate backup locations now,
start with just one of them. One is much better than none, even though more is better.
Configuring backups is beyond the scope of this guide. I will probably write a separate guide
on that topic in the future. When that happens I will add a link to that guide from here.
Check which disks are which
In the QEMU VM, all our disks appear as virtual disks, because they use a virtual driver,
even though they are connected as raw disks.
In situations where you have disks with different physical properties that you care about
when installing FreeBSD (i.e. unless all of them have the same capacities, link speeds, etc.),
you want to be sure of which is which.
Note
As mentioned earlier on in the guide, an alternative to attaching all the physical
disks to the QEMU VM is to attach only those disks which you intend to use during
the install of FreeBSD. If you did that and all disks in the QEMU VM are those
that you intend to use, you don't have to check which is which at this stage.
(Although double-checking at this point can still be a good idea.)
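From the mfsBSD shell in the VM, an easy way to tell the virtual disks apart is by their sizes, for example with:
geom disk list
Each vtbdN entry lists its Mediasize (and Sectorsize/Stripesize), which should be enough to match the virtual disks to the drives from the rescue system's login banner.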
For example, for my install I want to create the zpool so that it is a raidz vdev
consisting of the three HDDs. I will then add the SLOG device at a later stage after the
initial install is done.
So in my case I will want to select the devices vtbd1, vtbd2, and vtbd3 during install.
Performing the install
Run
bsdinstall
For the hostname I choose stage4, because the normal boot itself has three stages, and this will, in a sense, be our fourth stage of booting.
At the partitioning step we do guided root on ZFS, and in my case I select:
Pool Type/Disks to consist of a raidz vdev with three drives (the three HDDs)
Force 4K Sectors? to YES
In my case this was already pre-selected, and this is what I want because of the physical sector sizes of my HDDs.
Encrypt Disks to NO
Remember, this is the "outer" base system. The "outer" base system is unencrypted,
but it will hold none of our service configurations or data; it is nothing more than
a default install running an SSH server.
Partition Scheme to GPT (UEFI)
Swap Size to 0
Some people insist that having no swap is a terrible idea. I prefer having no swap.
At the user creation step, after you've created a password for root, create a user that has "boot" as part of its name,
to distinguish it from the kinds of users you normally make on your servers. For example, I usually make my user named
"erikn" but here I name it erikboot. When asked if you want to add the user to any additional groups,
make sure to add the user to the wheel group.
Keep ssh selected as a service to run.
For all other steps make whatever choices you'd normally make according to your preference.
Finish initial steps
Export the zpool and then power off the VM.
zpool export zroot
poweroff
Check that it works so far
Now it's time to boot the VM again, but without the mfsBSD media.
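Log in (on the VNC console, or over ssh) and check that the pool looks sane; the output below comes from running:
zpool list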
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
zroot 27.3T 1.25G 27.3T - - 0% 0% 1.00x ONLINE -
Export the zpool and shut down the VM. Then boot it with the mfsBSD media again.
zpool export zroot
poweroff
Note
Depending on what services you chose to run when you installed FreeBSD,
it might not be possible to export the zpool at this point. For example,
it might say that /var/log is busy. In that case, don't worry – power off
the machine with the poweroff command even if you were not able to export
the zpool.
Use your own public key when creating the authorized_keys file for your own non-root user. Not mine.
You are the one that needs to be able to log in to your server, not me!
Exit the shell we spawned with su
exit
Disallow password login over ssh for the outer system (our stage4 install that we are currently chrooted into)
by setting KbdInteractiveAuthentication to no in /etc/ssh/sshd_config.
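In other words, the chrooted stage4 system's /etc/ssh/sshd_config should end up containing a line like:
KbdInteractiveAuthentication no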
The outer and inner systems will have different host keys for ssh.
In order to properly keep track of the known hosts for the outer and inner system on your client (such as your laptop),
you can create entries similar to the following in your ~/.ssh/config on your client:
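Here is a sketch of the idea; the host aliases match the ones used later in this guide, while the address, usernames, forwarded port, and known_hosts file paths are illustrative and will need adjusting to your setup:
# ~/.ssh/config on the client (sketch)
Host de5-stage4-qemu
    HostName 203.0.113.10              # your server's address
    Port 2222                          # assuming the VM's ssh port is forwarded here
    User erikboot
    UserKnownHostsFile ~/.ssh/known_hosts_de5_stage4

Host de5-stage4
    HostName 203.0.113.10
    User erikboot
    UserKnownHostsFile ~/.ssh/known_hosts_de5_stage4

Host de5-inner-qemu
    HostName 203.0.113.10
    Port 2222
    User erikn
    UserKnownHostsFile ~/.ssh/known_hosts_de5_inner

Host de5-inner
    HostName 203.0.113.10
    User erikn
    UserKnownHostsFile ~/.ssh/known_hosts_de5_inner
Keeping separate UserKnownHostsFile entries for the outer and inner systems is one way to stop their differing host keys from triggering ssh's host-key-changed warnings.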
After you have done that, ssh into the outer system currently running in QEMU from your client
using your equivalent of my erikboot non-root user, with the alias you created for your server
with the outer system running in QEMU. In my case, de5-stage4-qemu.
ssh de5-stage4-qemu
Reservation
Create a dataset that will reserve 20% of the capacity of the pool,
as per the recommendation from Michael W. Lucas in the book FreeBSD Mastery: Advanced ZFS.
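For example (a sketch; the 360G size here just matches the zroot/reservation entry in the zfs list output further down, so pick whatever fits the 20% recommendation for your own pool, and reservation instead of refreservation works too):
doas zfs create -o mountpoint=none -o refreservation=360G zroot/reservation
Next, create the encrypted parent dataset that will hold the inner system: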
doas zfs create -o mountpoint=none -o encryption=on -o keyformat=passphrase zroot/IROOT
Enter new passphrase:
Re-enter new passphrase:
Create a passphrase and remember it. You will use this passphrase to decrypt the inner system
every time you have ssh-ed into the outer system after a reboot.
doas zfs create -o mountpoint=none zroot/IROOT/default
zfs list -o name,used,avail,refer,mountpoint,encryption,keyformat
NAME USED AVAIL REFER MOUNTPOINT ENCRYPTION KEYFORMAT
zroot 361G 17.7T 128K /zroot off none
zroot/IROOT 490K 17.7T 245K none aes-256-gcm passphrase
zroot/IROOT/default 245K 17.7T 245K none aes-256-gcm passphrase
zroot/ROOT 1.16G 17.7T 128K none off none
zroot/ROOT/default 1.16G 17.7T 1.16G / off none
zroot/reservation 360G 18.0T 128K none off none
doas zfs set -u mountpoint=/mnt zroot/IROOT/default
doas zfs mount zroot/IROOT/default
mount
zroot/ROOT/default on / (zfs, local, noatime, nfsv4acls)
devfs on /dev (devfs)
/dev/gpt/efiboot0 on /boot/efi (msdosfs, local)
zroot on /zroot (zfs, local, noatime, nfsv4acls)
zroot/IROOT/default on /mnt (zfs, local, noatime, nfsv4acls)
Install "inner"
doas bsdinstall
Choose inner as the hostname.
On the partitioning step, choose "Shell" ("Open a shell and partition by hand").
We've already done the partitioning and mounted the target, so just proceed to exit.
exit
Select a mirror as usual, and the installer will then extract the system.
After it finishes, exit the installer and have a look at the extracted files.
ls -al /mnt
total 234
drwxr-xr-x 19 root wheel 24 Oct 16 02:34 .
drwxr-xr-x 20 root wheel 25 Oct 16 02:12 ..
-rw-r--r-- 2 root wheel 1011 May 31 11:00 .cshrc
-rw-r--r-- 2 root wheel 495 May 31 11:00 .profile
-r--r--r-- 1 root wheel 6109 May 31 11:39 COPYRIGHT
drwxr-xr-x 2 root wheel 49 May 31 11:00 bin
drwxr-xr-x 14 root wheel 70 Oct 16 02:34 boot
dr-xr-xr-x 2 root wheel 3 Oct 16 02:33 dev
-rw------- 1 root wheel 4096 Oct 16 02:34 entropy
drwxr-xr-x 30 root wheel 107 Oct 16 02:34 etc
drwxr-xr-x 3 root wheel 3 Oct 16 02:33 home
drwxr-xr-x 4 root wheel 78 May 31 11:08 lib
drwxr-xr-x 3 root wheel 5 May 31 10:58 libexec
drwxr-xr-x 2 root wheel 2 May 31 10:32 media
drwxr-xr-x 2 root wheel 2 May 31 10:32 mnt
drwxr-xr-x 2 root wheel 2 May 31 10:32 net
dr-xr-xr-x 2 root wheel 2 May 31 10:32 proc
drwxr-xr-x 2 root wheel 150 May 31 11:04 rescue
drwxr-x--- 2 root wheel 7 May 31 11:39 root
drwxr-xr-x 2 root wheel 150 May 31 11:27 sbin
lrwxr-xr-x 1 root wheel 11 May 31 10:32 sys -> usr/src/sys
drwxrwxrwt 2 root wheel 2 May 31 10:32 tmp
drwxr-xr-x 15 root wheel 15 May 31 11:49 usr
drwxr-xr-x 24 root wheel 24 May 31 10:32 var
Give the inner system the same hostid as the outer system, so that zpool import will not think the pool has been used by a different system.
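One way to do that is to copy the hostid file from the outer system into the inner system's /etc:
doas cp /etc/hostid /mnt/etc/hostid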
As with the outer system, use a public key of your own when creating the authorized_keys file
for your own non-root user. Not mine.
Note that we specified different ssh public keys to log in to the "outer" and "inner" systems.
Edit the rc.conf files of the outer and inner systems.
doas nvim /etc/rc.conf
clear_tmp_enable="YES"
syslogd_flags="-ss"
hostname="stage4"
# Used when booting in QEMU
ifconfig_em0="DHCP"
ifconfig_em0_ipv6="inet6 accept_rtadv"
# Used when booting on hardware
ifconfig_igb0_name="extif"
ifconfig_extif="DHCP"
local_unbound_enable="YES"
sshd_enable="YES"
ntpd_enable="YES"
ntpd_sync_on_start="YES"
moused_nondefault_enable="NO"
# Set dumpdev to "AUTO" to enable crash dumps, "NO" to disable
dumpdev="AUTO"
zfs_enable="YES"
doas nvim /mnt/etc/rc.conf
clear_tmp_enable="YES"
syslogd_flags="-ss"
hostname="inner"
# Used when booting in QEMU
ifconfig_em0="DHCP"
ifconfig_em0_ipv6="inet6 accept_rtadv"
# Used when booting on hardware
ifconfig_igb0_name="extif"
ifconfig_extif="DHCP"
local_unbound_enable="YES"
sshd_enable="YES"
moused_nondefault_enable="NO"
# Set dumpdev to "AUTO" to enable crash dumps, "NO" to disable
dumpdev="AUTO"
zfs_enable="YES"
wireguard_enable="YES"
wireguard_interfaces="wg0"
jail_enable="YES"
Power off the VM, and then power it on and ssh into it as your "outer" local user again
(whatever equivalent you have of my erikboot user).
Then, unset the mountpoint for the inner system
doas zfs set mountpoint=none zroot/IROOT/default
Decrypt it
doas zfs load-key zroot/IROOT
Enter passphrase for 'zroot/IROOT':
And attempt to reboot into it
doas kenv vfs.root.mountfrom="zfs:zroot/IROOT/default"
doas reboot -r
If you're watching on VNC you'll see that it says
Trying to mount root from zfs:zroot/IROOT/default []...
and after a little while you should see the login prompt with the hostname of the inner system:
FreeBSD/amd64 (inner) (ttyv0)
login:
And then ssh using the relevant alias. In this case, for me, it's my de5-inner-qemu.
ssh de5-inner-qemu
Switch to the root user, using the root password that you set during the install of the inner system.
After you have had UEFI enabled by Hetzner support, or if it was already enabled according to the output
of efibootmgr, boot the machine, and you should be able to ssh into the outer system using
your ssh host alias for it.
ssh de5-stage4
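(The efibootmgr check mentioned above is done from the Linux rescue system before this point; running it there with no arguments, or verbosely as below, lists the UEFI boot entries if the rescue system itself was booted via UEFI, and errors out if it was booted in legacy BIOS mode.)
efibootmgr -v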
Rebooting into the inner system
Now that your machine is running FreeBSD on the metal, and you have logged in to the outer system via ssh,
it's time to reboot into the inner system.
Decrypt the ZFS datasets for the inner system.
doas zfs load-key zroot/IROOT
Enter passphrase for 'zroot/IROOT':
And attempt to reboot into it
doas kenv vfs.root.mountfrom="zfs:zroot/IROOT/default"
doas reboot -r
Wait a bit for the system to reboot. Give it a minute or two. Then, ssh into the inner system
using your ssh host alias for it.