zpool create -f -d -m none -o ashift=12 -O atime=off -o feature@lz4_compress=enabled pool1 /dev/nvme1n1
zpool create <options> <poolname> <vdevs...>
-f
Forces use of vdevs, even if they appear in use or specify a conflicting replication level. Not all devices can be overridden in this manner.
-d
Do not enable any features on the new pool. Individual features can be enabled by setting their corresponding properties to enabled with -o. See zpool-features(7) for details about feature properties.
-m mountpoint
Sets the mount point for the root dataset. The default mount point is /pool or altroot/pool if altroot is specified. The mount point must be an absolute path, legacy, or none. For more information on dataset mount points, see zfsprops(7).
-o property=value
Sets the given pool properties. See zpoolprops(7) for a list of valid properties that can be set.
-o feature@feature=value
Sets the given pool feature. See zpool-features(7) for a list of valid features that can be set. Value can be either disabled or enabled.
-O file-system-property=value
Sets the given file system properties in the root file system of the pool. See zfsprops(7) for a list of valid properties that can be set.
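Putting the options together: a minimal sketch of a mirrored variant of the pool above, assuming a hypothetical second disk /dev/nvme2n1 (not part of this setup):
# Same options as the single-disk pool, but with a mirror vdev for redundancy
zpool create -f -d -m none -o ashift=12 -O atime=off \
    -o feature@lz4_compress=enabled pool1 mirror /dev/nvme1n1 /dev/nvme2n1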
Some Settings:
ashift=12          4 KiB sectors (2^12 bytes); fixed at pool creation time
recordsize=128KiB  standard usage (a mixture of file sizes); this is the default
atime              records file access times, which are of no use for most workloads; disabled above with -O atime=off
The compression feature, when enabled, compresses data before it is written to disk (the copy-on-write nature of ZFS makes inline compression cheap to offer).
The default compression algorithm is currently lz4.
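Note that enabling the lz4_compress pool feature does not by itself compress anything; compression is turned on per dataset. A minimal sketch, assuming the pool1/vmsdisks dataset created below:
zfs set compression=lz4 pool1/vmsdisks
zfs set atime=off pool1/vmsdisks        # redundant here: atime=off is inherited from the pool root
zfs set recordsize=128K pool1/vmsdisks  # the default; ashift, by contrast, cannot be changed after creation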
zpool commands
zpool list
zpool status
Create the file system "vmsdisks" on "pool1" and set its mount point
zfs create pool1/vmsdisks
zfs set mountpoint=/vmsdisks pool1/vmsdisks
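The same result in one step, using the standard -o option of zfs create:
zfs create -o mountpoint=/vmsdisks pool1/vmsdisks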
The file system is now ready to use.
Get some information about the file system
zfs get compression
zfs get atime
zfs get mountpoint
mount | grep zfs
zfs list -t all
cat /etc/default/zfs
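To scope these queries to a single dataset instead of printing every dataset on the system, zfs get takes a comma-separated property list and a dataset name:
zfs get compression,atime,mountpoint pool1/vmsdisks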
Automatic snapshots using zfsbud
# https://github.com/gbytedev/zfsbud
#
# Download zfsbud.sh, make it executable, configure it
cd /root
wget https://raw.githubusercontent.com/gbytedev/zfsbud/refs/heads/master/zfsbud.sh
chmod +x zfsbud.sh
wget https://raw.githubusercontent.com/gbytedev/zfsbud/refs/heads/master/default.zfsbud.conf
mv default.zfsbud.conf zfsbud.conf
# How to configure ZFSBud
vi zfsbud.conf
# Snapshot retention policy
# Apart from the below, the last snapshot is always kept, as well as the most recent common snapshot (in case the `--send|-s` flag is passed).
# The number of hourly snapshots to keep (one per hour, starting with this hour and going backward).
hourly=20
# The number of daily snapshots to keep.
daily=10
# The number of weekly (Sunday) snapshots to keep.
weekly=4
# The number of snapshots of the first Sunday of every month to keep.
monthly=0
# The number of snapshots of the first Sunday of the year to keep.
yearly=0
# Other settings
# This can be overridden at runtime with --snapshot-prefix|-p
# Change this to match your server name or dataset name
default_snapshot_prefix=myservername_
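Before scheduling, run zfsbud once by hand with the same flags used in the cron entry below, and check that a snapshot appears:
/root/zfsbud.sh --create-snapshot --remove-old pool1/vmsdisks
zfs list -t snapshot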
Add the task to cron
crontab -e
# take a snapshot every 10 minutes, remove old ones, keep the newest according to /root/zfsbud.conf
*/10 * * * * /root/zfsbud.sh --create-snapshot --remove-old pool1/vmsdisks
List snapshots
zfs list -t all
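To see only the snapshots of the backed-up dataset, sorted by creation time (all standard zfs list options):
zfs list -t snapshot -r -o name,used,creation -s creation pool1/vmsdisks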
zpool commands
zpool status
zpool list
zpool iostat
zpool history
# zpool example commands
root@server:~# zpool status
  pool: pool1
 state: ONLINE
  scan: scrub repaired 0B in 00:08:21 with 0 errors on Sun Feb  9 00:32:22 2025
config:

        NAME        STATE     READ WRITE CKSUM
        pool1       ONLINE       0     0     0
          nvme1n1   ONLINE       0     0     0

errors: No known data errors
root@server:~# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
pool1   416G   255G   161G        -         -    50%    61%  1.00x    ONLINE  -
root@server:~# zpool iostat
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
pool1        255G   161G      5     43   481K  1.23M
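zpool iostat also takes a -v flag for per-vdev statistics and an interval in seconds for continuous output, e.g. a refresh every 5 seconds:
zpool iostat -v 5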