The TrueNAS installer doesn't offer a way to use anything less than the full device. This is usually a waste of resources when installing to a modern NVMe drive, which typically holds several hundred gigabytes. TrueNAS SCALE only needs a few GB for its system files, so installing to a 16GB partition leaves the rest of the disk free for other uses.
The easiest way to solve this is to modify the installer script before starting the installation process.
- Boot the TrueNAS SCALE installer from a USB stick/ISO.
- Select `Shell` in the first menu (instead of installing).
- While in the shell, run the following commands:
```
sed -i 's/sgdisk -n3:0:0/sgdisk -n3:0:+16384M/g' /usr/sbin/truenas-install
/usr/sbin/truenas-install
```
For TrueNAS SCALE 24.10+, see the 24.10 (Electric Eel) comment below.
The first command modifies the installer script so that it creates a 16GiB boot-pool partition instead of using the full disk. The second command restarts the TrueNAS SCALE installer.
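Before rerunning the installer, you can confirm the substitution actually took effect with a quick `grep` (the pattern matches the line the `sed` command rewrites):

```
# Should print the sgdisk call with the new +16384M size cap.
grep 'sgdisk -n3' /usr/sbin/truenas-install
```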
- Continue installing according to the official docs.

Steps 7-12 in the deprecated guide have instructions on how to allocate the remaining space to a partition you can use for data. If you are using a single drive, just ignore the steps that have to do with mirroring.
Unfortunately this is only possible by using an intermediate device to act as the installation disk and later moving this data to the NVMe. Below I have documented the steps I took to get TrueNAS SCALE to run from a mirrored 16GB partition on NVMe disks.
For an easier initial partitioning, please see this comment and the discussion that follows. This should remove the need to use a USB stick as an intermediate medium.
- Install TrueNAS SCALE on a USB drive, preferably 16GB in size. If you use a 32GB stick you must create a 32GB partition on the NVMe, wasting space that could be used for VMs and Docker/k8s applications.
- Boot and enter a Linux shell as root, for example by enabling the SSH service and logging in with the root password.
- Check the available devices:

```
$ parted
(parted) print devices
/dev/sdb (15.4GB)  # boot device
/dev/nvme0n1 (500GB)
/dev/nvme1n1 (512GB)
(parted) quit
```
If you only have one NVMe disk, just ignore the instructions that include the second disk (nvme1n1). That disk is used to create a ZFS mirror to handle disk failures.
- Clone the boot device to the other devices:

```
$ cat /dev/sdb > /dev/nvme0n1
$ cat /dev/sdb > /dev/nvme1n1
```
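If you prefer progress output while cloning, `dd` with GNU coreutils' `status=progress` flag is an equivalent alternative (same source and target devices as above):

```
# Clone the USB boot device to each NVMe disk, showing progress.
dd if=/dev/sdb of=/dev/nvme0n1 bs=4M status=progress
dd if=/dev/sdb of=/dev/nvme1n1 bs=4M status=progress
```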
- Check the partition layout. Fix all the GPT space warning prompts that show up:

```
$ parted -l
[...]
Warning: Not all of the space available to /dev/nvme0n1 appears to be used, you can
fix the GPT to use all of the space (an extra 946741296 blocks) or continue with the
current setting?
Fix/Ignore? f
[...]
Model: USB SanDisk 3.2Gen1 (scsi)
Disk /dev/sdb: 15.4GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      20.5kB  1069kB  1049kB                     bios_grub
 2      1069kB  538MB   537MB   fat32              boot, esp
 3      538MB   15.4GB  14.8GB  zfs
[...]
```
The other disks' partition tables should look identical to this.
- Remove the zfs partition from the new devices, number 3 in this case. This is the boot-pool partition and we will recreate it later. We remove it because ZFS would otherwise recognize leftover metadata that makes it think the partition is part of the pool when it is not.

```
$ parted /dev/nvme0n1 rm
Partition number? 3
Information: You may need to update /etc/fstab.
```
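As an extra safeguard, not part of the original steps, you could also have cleared the ZFS labels explicitly before removing the partition; `zpool labelclear` and `wipefs` are the standard tools for this (device path follows this guide's example):

```
# Run these BEFORE removing partition 3, while /dev/nvme0n1p3 still exists.
# Clear the ZFS label so the old pool metadata is gone:
zpool labelclear -f /dev/nvme0n1p3
# Or wipe all filesystem signatures from the partition:
wipefs -a /dev/nvme0n1p3
```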
- Recreate the boot-pool partition as a 16GiB partition with a slightly later start than before; make sure the start is divisible by 2048 for best performance (526336 % 2048 = 0). Shifting the start also ensures that ZFS doesn't find any metadata from the old partition. Start with the smaller disk if they are not identical.

```
$ parted
(parted) unit kiB
(parted) select /dev/nvme0n1
(parted) print
Model: KINGSTON SNVS500GB (nvme)
Disk /dev/nvme0n1: 488386584kiB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start    End        Size       File system  Name  Flags
 1      20.0kiB  1044kiB    1024kiB                       bios_grub
 2      1044kiB  525332kiB  524288kiB  fat32              boot, esp

(parted) mkpart boot-pool 526336kiB 17303552kiB
(parted) print
Model: KINGSTON SNVS500GB (nvme)
Disk /dev/nvme0n1: 488386584kiB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start      End          Size         File system  Name       Flags
 1      20.0kiB    1044kiB      1024kiB                               bios_grub
 2      1044kiB    525332kiB    524288kiB    fat32                    boot, esp
 3      526336kiB  17303552kiB  16777216kiB               boot-pool
```
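A quick way to sanity-check the alignment arithmetic from a shell (526336 is the start position used above):

```
# 526336 divides evenly by 2048, so the chosen start satisfies the alignment rule.
$ echo $((526336 % 2048))
0
```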
- Now you can create a partition allocating the rest of the disk:

```
(parted) mkpart pool 17303552kiB 100%
(parted) print
Model: KINGSTON SNVS500GB (nvme)
Disk /dev/nvme0n1: 488386584kiB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start        End           Size          File system  Name       Flags
 1      20.0kiB      1044kiB       1024kiB                                bios_grub
 2      1044kiB      525332kiB     524288kiB     fat32                    boot, esp
 3      526336kiB    17303552kiB   16777216kiB                boot-pool
 4      17303552kiB  488386560kiB  471083008kiB               pool
```
- Do the same for the next device, but this time use the same values as in the printout above. We do this to make sure that the partitions are exactly the same size. In this example the disks are slightly different in size, so using 100% on the second disk would create a partition larger than the one we just created on the smaller disk.

```
(parted) select /dev/nvme1n1
Using /dev/nvme1n1
(parted) mkpart boot-pool 526336kiB 17303552kiB
(parted) mkpart pool 17303552kiB 488386560kiB
(parted) print
Model: TS512GMTE220S (nvme)
Disk /dev/nvme1n1: 500107608kiB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start        End           Size          File system  Name       Flags
 1      20.0kiB      1044kiB       1024kiB                                bios_grub
 2      1044kiB      525332kiB     524288kiB     fat32                    boot, esp
 3      526336kiB    17303552kiB   16777216kiB                boot-pool
 4      17303552kiB  488386560kiB  471083008kiB               pool
```
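Optionally, you can verify that the partitions really ended up identical by dumping both partition tables with `sfdisk -d` and diffing them (the dump file paths are arbitrary):

```
# The start= and size= values of each partition should match;
# device names, label-ids, and partition UUIDs are expected to differ.
sfdisk -d /dev/nvme0n1 > /tmp/nvme0.dump
sfdisk -d /dev/nvme1n1 > /tmp/nvme1.dump
diff /tmp/nvme0.dump /tmp/nvme1.dump
```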
- Make the new system partitions part of the boot-pool. This is done by attaching them to the existing pool and then detaching the USB drive.

```
$ zpool attach boot-pool sdb3 nvme0n1p3
```
Wait for resilvering to complete; check progress with:

```
$ zpool status
```
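To avoid re-running the command by hand, you can poll it with the standard `watch` utility (the 5-second interval is an arbitrary choice):

```
# Re-run zpool status every 5 seconds until the resilver completes.
watch -n 5 zpool status boot-pool
```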
When resilvering is complete we can detach the USB device:

```
$ zpool offline boot-pool sdb3
$ zpool detach boot-pool sdb3
```
Finally, add the last drive to create a mirror of the boot-pool:

```
$ zpool attach boot-pool nvme0n1p3 nvme1n1p3
$ zpool status
  pool: boot-pool
 state: ONLINE
  scan: resilvered 2.78G in 00:00:03 with 0 errors on Wed Oct 27 07:16:56 2021
config:

        NAME           STATE     READ WRITE CKSUM
        boot-pool      ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            nvme0n1p3  ONLINE       0     0     0
            nvme1n1p3  ONLINE       0     0     0
```
At this point you can remove the USB device; when the machine is rebooted it will start up from the NVMe devices instead. Check the BIOS boot order if it doesn't.
- Now that the boot-pool is mirrored, we want to create a mirrored pool using the remaining partitions:

```
$ zpool create pool1 mirror nvme0n1p4 nvme1n1p4
$ zpool status
  pool: boot-pool
 state: ONLINE
  scan: resilvered 2.78G in 00:00:03 with 0 errors on Wed Oct 27 07:16:56 2021
config:

        NAME           STATE     READ WRITE CKSUM
        boot-pool      ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            nvme0n1p3  ONLINE       0     0     0
            nvme1n1p3  ONLINE       0     0     0

  pool: pool1
 state: ONLINE
config:

        NAME           STATE     READ WRITE CKSUM
        pool1          ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            nvme0n1p4  ONLINE       0     0     0
            nvme1n1p4  ONLINE       0     0     0
```
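As an optional tweak not used above, OpenZFS lets you pin the pool's sector size at creation time with the `ashift` property; `ashift=12` (4KiB sectors) is a common choice for NVMe drives:

```
# Same mirror layout, but explicitly created with 4KiB sectors.
zpool create -o ashift=12 pool1 mirror nvme0n1p4 nvme1n1p4
```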
To be able to import it in the Web UI we need to export it:

```
$ zpool export pool1
```
- All done! Import pool1 using the Web UI and start enjoying the additional space.
For TrueNAS SCALE 24.10 (Electric Eel)

Some parts are missing from the initial guide, so you need to use some steps from the deprecated guide.
1. Boot the installer and enter a shell (as the `root` user).
2. Use `parted` to edit partitions.
3. `print list` to find your boot device, for me it was `/dev/nvme0n1`.
4. `select <path to your boot device>`.
5. `unit kiB`.
6. `print` to get exact info on your boot device's current partition status.
7. Note where the last partition (the `zfs` one) ends - for me it was `17304576kiB`.
8. `mkpart <new partition name> <last partition end in kiB> 100%`, for me it was `mkpart ssd-pool 17304576kiB 100%`.
9. `print` to verify (you can change to `unit giB` for ease of use). Note your new partition number (for me it was 4).
10. `quit` to exit `parted`.
11. `zpool create <pool name> <path to your boot device>p<your new partition number>`. (For me it was `zpool create ssd-pool /dev/nvme0n1p4`.)
12. I got the error `cannot mount '/ssd-pool': failed to create mountpoint: Read-only file system`, but I could see with `zpool status` that my pool was created.
13. `zpool export <pool name>` to allow the TrueNAS Web UI to see this pool.
14. `exit` to exit the shell.
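Putting the steps together, the whole session looks roughly like this (device path, partition end, and pool name are the example values from the steps above; yours will differ):

```
$ parted
(parted) print list
(parted) select /dev/nvme0n1
(parted) unit kiB
(parted) print
(parted) mkpart ssd-pool 17304576kiB 100%
(parted) print
(parted) quit
$ zpool create ssd-pool /dev/nvme0n1p4
$ zpool status
$ zpool export ssd-pool
$ exit
```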
Some extras on how to create an encrypted pool (too much detail to write up fully; I put some pointers and commands here, but I don't have time to add the exact details):
No Encryption: create the pool as in the steps above.

With Encryption: this will also require manually copying the key from the shell to the Web UI when importing the pool.
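The exact commands aren't given here; a minimal sketch using standard OpenZFS encryption properties could look like this (the key file location, cipher, and pool/partition names are assumptions, not the commenter's actual commands):

```
# Generate a 32-byte key as 64 hex characters (no trailing newline).
openssl rand -hex 32 | tr -d '\n' > /tmp/ssd-pool.key

# Create the pool with native ZFS encryption, reading the key from the file.
zpool create \
    -O encryption=aes-256-gcm \
    -O keyformat=hex \
    -O keylocation=file:///tmp/ssd-pool.key \
    ssd-pool /dev/nvme0n1p4

# Keep a copy of the key file: the Web UI asks for this key when importing.
```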
Adding a second SSD and mirroring both boot-pool and ssd-pool:

1. Find the new disk under `/dev` - for me `/dev/nvme1n1`.
2. Copy the partition table with `sfdisk -d <first disk path> | sfdisk <new disk path>` - for me `sfdisk -d /dev/nvme0n1 | sfdisk /dev/nvme1n1`.
3. Attach the new partitions with `zpool attach <pool name> <partition already in pool> <new partition added to pool>`, for me `zpool attach boot-pool nvme0n1p3 nvme1n1p3` and `zpool attach ssd-pool nvme0n1p4 nvme1n1p4`.

Validation of steps 2 & 3 can be done with `parted` (`print list`) and `zpool status` respectively.
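For convenience, the mirroring commands above collected into a single session (values are the ones used in the steps):

```
# Copy the partition layout from the first NVMe disk to the new one.
sfdisk -d /dev/nvme0n1 | sfdisk /dev/nvme1n1

# Attach the new disk's partitions to mirror both pools.
zpool attach boot-pool nvme0n1p3 nvme1n1p3
zpool attach ssd-pool nvme0n1p4 nvme1n1p4

# Check the resilver status.
zpool status
```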