# How to set up RAID1 with BTRFS on Fedora

## Here we list our disks for future use.

```bash
sudo lsblk
```

```bash
NAME                                          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
zram0                                         252:0    0     8G  0 disk  [SWAP]
nvme1n1                                       259:0    0 953.9G  0 disk
├─nvme1n1p1                                   259:1    0   190M  0 part  /boot/efi
├─nvme1n1p3                                   259:3    0     1G  0 part  /boot
└─nvme1n1p7                                   259:7    0 932.6G  0 part
  └─luks-13f88890-0b52-4080-a3b6-b406d616c659 253:0    0 932.6G  0 crypt /home
nvme2n1                                       259:8    0 931.5G  0 disk
nvme0n1                                       259:10   0 476.9G  0 disk
└─nvme0n1p1                                   259:11   0 476.9G  0 part
```

### Create second disk partitions using parted & Disks

Duplicate the /boot and /boot/efi partition structure of disk one.

`sudo parted /dev/nvme2n1`

Use the `Disks` utility to create the LUKS-encrypted btrfs partition.

## Here we list our disks again after partitioning.

```bash
sudo lsblk
```

```bash
NAME                                          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
zram0                                         252:0    0     8G  0 disk  [SWAP]
nvme1n1                                       259:0    0 953.9G  0 disk
├─nvme1n1p1                                   259:1    0   190M  0 part  /boot/efi
├─nvme1n1p2                                   259:3    0     1G  0 part  /boot
└─nvme1n1p3                                   259:7    0 932.6G  0 part
  └─luks-13f88890-0b52-4080-a3b6-b406d616c659 253:0    0 932.6G  0 crypt /var/lib/docker/btrfs
nvme2n1                                       259:8    0 931.5G  0 disk
├─nvme2n1p1                                   259:1    0   190M  0 part
├─nvme2n1p2                                   259:3    0     1G  0 part
└─nvme2n1p3                                   259:9    0 931.5G  0 part
  └─luks-429c570c-2743-4cc3-beaa-dfc8facb118c 253:1    0 931.5G  0 crypt
nvme0n1                                       259:10   0 476.9G  0 disk
└─nvme0n1p1                                   259:11   0 476.9G  0 part
```

### Add the second disk to the btrfs filesystem

```bash
sudo btrfs device usage /
```

```bash
/dev/mapper/luks-13f88890-0b52-4080-a3b6-b406d616c659, ID: 1
   Device size:           932.57GiB
   Device slack:              0.00B
   Data,single:            85.01GiB
   Metadata,single:         8.01GiB
   System,single:           4.00MiB
   Unallocated:           839.55GiB
```

We have to add the `-f` option to force the add when the disk already contains a filesystem; the existing filesystem will be overwritten. **This is destructive!** The balance also takes a long time if there is a lot of data. Be patient. You can follow progress by running the usage command from a different terminal.

```bash
sudo btrfs device add -f /dev/mapper/luks-429c570c-2743-4cc3-beaa-dfc8facb118c /
sudo btrfs device usage /
sudo btrfs balance start -dconvert=raid1 -mconvert=raid1 /
sudo btrfs device usage /
```

### Update crypttab

Add the new LUKS device to /etc/crypttab:

```bash
luks-429c570c-2743-4cc3-beaa-dfc8facb118c UUID=429c570c-2743-4cc3-beaa-dfc8facb118c none discard
```

### Update Bootloader

Now we need to add the new LUKS device to the kernel command line. Add `rd.luks.uuid=luks-429c570c-2743-4cc3-beaa-dfc8facb118c` to the `GRUB_CMDLINE_LINUX` value in `/etc/default/grub`, then rebuild the GRUB config and the initramfs.

```bash
sudo -i
nano /etc/default/grub
grub2-mkconfig -o "$(readlink /etc/grub2.cfg)"
dracut --force
reboot
```

### Duplicate /boot to the second disk as a precaution

```bash
sudo dd if=/dev/nvme1n1p2 of=/dev/nvme2n1p2 bs=1024 status=progress
sudo dd if=/dev/nvme1n1p1 of=/dev/nvme2n1p1 bs=1024 status=progress
```
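With the balance finished, it is worth confirming that data and metadata really ended up with the RAID1 profile before relying on the mirror. A minimal check, assuming the filesystem is mounted at `/`:

```bash
# Data and Metadata should both report RAID1 here
sudo btrfs filesystem df /

# Confirm no balance is still running
sudo btrfs balance status /

# Optionally scrub the filesystem to verify checksums across both devices
sudo btrfs scrub start /
sudo btrfs scrub status /
```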
# How to use Snapper on Fedora 34

Snapper is a snapshot management tool for btrfs. It is available at http://snapper.io/.

## Here we list our top level subvolumes.

If you are using Docker's btrfs storage driver you end up with a lot of snapshots, which necessitates the grep filter used below.

```bash
sudo btrfs subvolume list / | grep "level 5"
```

```bash
ID 256 gen 1987943 top level 5 path home
ID 257 gen 1987942 top level 5 path root
```

## Install snapper

This installs snapper plus a dnf plugin that takes an automatic snapshot before and after each dnf transaction.

```bash
sudo dnf install snapper python-dnf-plugin-snapper
```

## Set up snapper

Here we create a snapper config for the `root` subvolume.

```bash
sudo snapper -c root create-config /
```

The `/.snapshots` subvolume it creates is nested within the `root` subvolume rather than stored at the top level (`level 5`), so we will move it. We want it at the top level so we can roll back our root subvolume and boot into it directly.

```bash
sudo btrfs subvolume list / | grep snapshots
sudo btrfs subvolume delete /.snapshots
```

Now we will make new mount points within the root path, create top-level subvolumes, and mount them there.

```bash
sudo mkdir /.snapshots
sudo mkdir /backups
sudo mkdir /mnt/btrfs
sudo mount /dev/dm-0 -o subvolid=5 /mnt/btrfs
cd /mnt/btrfs
sudo btrfs subvolume create snapshots
sudo btrfs subvolume create backups
cd ..
sudo umount /mnt/btrfs
sudo rmdir btrfs/
```

There are now two new top-level subvolumes.

```bash
sudo btrfs subvolume list / | grep "level 5"
```

```bash
ID 256 gen 1987943 top level 5 path home
ID 257 gen 1987942 top level 5 path root
ID 10963 gen 1987638 top level 5 path snapshots
ID 10963 gen 1987638 top level 5 path backups
```

### Mount the new subvolumes

```bash
sudo nano /etc/fstab
```

We add new lines at the bottom which mount the new subvolumes.

```bash
UUID=2402f445-7fd2-4a8b-8b53-6c27c67fb58d /           btrfs subvol=root,compress=zstd:1,x-systemd.device-timeout=0 0 0
UUID=13f88890-0b52-4080-a3b6-b406d616c659 /boot       ext4  defaults 1 2
UUID=62A1-49D9                            /boot/efi   vfat  umask=0077,shortname=winnt 0 2
UUID=2402f445-7fd2-4a8b-8b53-6c27c67fb58d /home       btrfs subvol=home,compress=zstd:1,x-systemd.device-timeout=0 0 0
UUID=2402f445-7fd2-4a8b-8b53-6c27c67fb58d /.snapshots btrfs subvol=snapshots,x-systemd.device-timeout=0 0 0
UUID=2402f445-7fd2-4a8b-8b53-6c27c67fb58d /backups    btrfs subvol=backups,x-systemd.device-timeout=0 0 0
```

Let's mount them.

```bash
sudo mount -a
```

We will make a user folder with user permissions and a convenient symlink in our home directory.

```bash
sudo mkdir /backups/$USER
sudo chown $USER:$USER /backups/$USER
cd $HOME
ln -s /backups/$USER backups
```

### Integrate Snapper with Grub to allow `/` rollbacks

The default subvolume is just the top level. This is not specific enough for our needs; we want to set it explicitly to a subvolume ID.

```bash
sudo btrfs subvolume get-default /
```

```bash
ID 5 (FS_TREE)
```

Recall from above that our `root` subvolume ID was `257`; we will set that as our default.

```bash
sudo btrfs subvolume set-default 257 /
sudo btrfs subvolume get-default /
```

```bash
ID 257 gen 1988138 top level 5 path root
```

Now we need to modify the `Grub` config.

```bash
sudo grubby --info=ALL
```

```bash
...
index=2
kernel="/boot/vmlinuz-5.13.14-200.fc34.x86_64"
args="ro rootflags=subvol=root rhgb quiet"
...
```

We want to remove the `rootflags=subvol=root` argument so the kernel mounts the default subvolume and therefore honors rollbacks requested by snapper.

```bash
sudo grubby --update-kernel=ALL --remove-args="rootflags=subvol=root"
```

```bash
sudo grubby --info=ALL
```

```bash
...
index=2
kernel="/boot/vmlinuz-5.13.14-200.fc34.x86_64"
args="ro rhgb quiet"
...
```

All better, so now we reboot.

```bash
reboot
```

### Rollback to a `root` snapshot

```bash
sudo snapper ls
```

```bash
# | Type   | Pre # | Date            | User | Cleanup | Description                            | Userdata
---+--------+-------+-----------------+------+---------+----------------------------------------+---------
0 | single |       |                 | root |         | current                                |
1 | pre    |       | Sun 18 Sep 2021 | root | number  | /bin/dnf -y install ...fc34.x86_64.rpm |
2 | post   | 1     | Sun 19 Sep 2021 | root | number  | /bin/dnf -y install ...fc34.x86_64.rpm |
```

Oh no, our install broke everything!

```bash
sudo snapper --ambit classic rollback 1
reboot
```

Life is good!
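To confirm what a rollback actually changed, a quick sanity check (assuming the `classic` ambit used above) looks roughly like this:

```bash
# The default subvolume should now point at the new writable snapshot
# created by the rollback rather than at the original root subvolume
sudo btrfs subvolume get-default /

# The rollback also records the pre-rollback state as a snapshot,
# so the snapshot list gains new entries
sudo snapper -c root list

# Once the rolled-back system proves stable, unwanted snapshots can be
# deleted by number (the number here is only an example)
sudo snapper -c root delete 2
```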
## Add automatic snapshots of `home` subvolume

We will create a snapper config for /home and give our personal user access since this is a single-user PC.

```bash
sudo snapper -c home create-config /home
sudo snapper -c home set-config SYNC_ACL=yes ALLOW_USERS=$USER
sudo snapper list-configs
```

```bash
Config | Subvolume
-------+----------
home   | /home
root   | /
```

### Make our first `home` snapshot

Notice `sudo` is not required anymore.

```bash
snapper -c home create --description "First Snapshot"
snapper -c home ls
```

```bash
# | Type   | Pre # | Date            | User | Cleanup  | Description    | Userdata
---+--------+-------+-----------------+------+----------+----------------+---------
0 | single |       |                 | root |          | current        |
1 | single |       | Sat 18 Sep 2021 | josh |          | First Snapshot |
```

### Set up automatic snapshots

```bash
sudo nano /etc/snapper/configs/root
```

Disable automatic hourly snapshots of `root`.

```bash
# create hourly snapshots
TIMELINE_CREATE="no"

# cleanup hourly snapshots after some time
TIMELINE_CLEANUP="no"
```

```bash
sudo nano /etc/snapper/configs/home
```

Configure automatic hourly snapshots of `home`.

```bash
# create hourly snapshots
TIMELINE_CREATE="yes"

# cleanup hourly snapshots after some time
TIMELINE_CLEANUP="yes"

# limits for timeline cleanup
TIMELINE_MIN_AGE="1800"
TIMELINE_LIMIT_HOURLY="5"
TIMELINE_LIMIT_DAILY="7"
TIMELINE_LIMIT_WEEKLY="0"
TIMELINE_LIMIT_MONTHLY="1"
TIMELINE_LIMIT_YEARLY="0"
```

Turn on the timers for automatic snapshots and cleanup.

```bash
sudo systemctl enable --now snapper-timeline.timer
sudo systemctl enable --now snapper-cleanup.timer
```

After waiting an hour...

```bash
snapper -c home ls
```

```bash
# | Type   | Pre # | Date            | User | Cleanup  | Description    | Userdata
---+--------+-------+-----------------+------+----------+----------------+---------
0 | single |       |                 | root |          | current        |
1 | single |       | Sat 18 Sep 2021 | josh |          | First Snapshot |
2 | single |       | Sun 18 Sep 2021 | root | timeline | timeline       |
```
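Timeline snapshots of `home` are mainly useful for recovering individual files rather than for full rollbacks. A minimal recovery workflow looks roughly like this; the snapshot numbers and the file path are only examples:

```bash
# Compare the live filesystem (snapshot 0) against snapshot 1
snapper -c home status 1..0

# Show the content changes for one file of interest
snapper -c home diff 1..0 /home/josh/.bashrc

# Revert that file to the state it had in snapshot 1
snapper -c home undochange 1..0 /home/josh/.bashrc
```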