arch-server-install

My Arch Server Install

Based on https://wiki.archlinux.org/title/Installation_guide, retrieved 2024-01-27.

This file will document any changes or details that I make for my own purpose(s).

Minimum needed to get out of console redirection

Important note: this same root password will be used for the system.

passwd
ip addr

Note the IP address, then remote in from the client:

ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root@THE_IP

Get into the chroot

This ignores my zpool, since that's handled already.

Also, specifics are initially being written for the VM that I'm testing this on.

As usual, the compress=zstd option on the first mounted subvolume will apply in practice to all the other mounted subvolumes. I'm still specifying it per-subvolume, based on how I would set it if that weren't the case.

fdisk -l
device=/dev/vda
vared -p 'Root device: ' -r "[$device]" device
echo 'g\nn\n\n\n+1G\nt\n1\nn\n\n\n\nw' | fdisk $device
sync
mkfs.fat -F 32 -n ESP ${device}1
mkfs.btrfs -L MAIN ${device}2
mount ${device}2 /mnt
btrfs subvolume create /mnt/@
btrfs subvolume create /mnt/@home
btrfs subvolume create /mnt/@snapshots
btrfs subvolume create /mnt/@var_log
btrfs subvolume create /mnt/@var_pacman_pkg
umount /mnt
mount -o subvol=@,compress=zstd ${device}2 /mnt
mount --mkdir ${device}1 /mnt/boot
mount --mkdir -o subvol=@home,compress=zstd ${device}2 /mnt/home
mount --mkdir -o subvol=@snapshots ${device}2 /mnt/.snapshots
mount --mkdir -o subvol=@var_log,compress=zstd ${device}2 /mnt/var/log
mount --mkdir -o subvol=@var_pacman_pkg ${device}2 /mnt/var/cache/pacman/pkg
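# optional check: compress=zstd is a filesystem-wide btrfs option, so it should show up on
# every one of these mounts even where it wasn't passed explicitly
findmnt -t btrfs -o TARGET,OPTIONS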
while systemctl show reflector | grep -q ActiveState=activating; do echo Waiting for Reflector to finish...; sleep 1s; done
echo Reflector finished
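# uncomment Color and ParallelDownloads in pacman.conf before running pacstrap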
perl -pi -e 's/^#(?=(?:Color)|(?:ParallelDownloads = \d+)$)//' /etc/pacman.conf
pacstrap -PK /mnt base linux-lts dracut base-devel linux-lts-headers linux-firmware amd-ucode btrfs-progs emacs-nox git man-db man-pages texinfo openssh pacman-contrib dkms zsh devtools
genfstab -L /mnt >> /mnt/etc/fstab
ln -sf ../run/systemd/resolve/stub-resolv.conf /mnt/etc/resolv.conf
cut -f 2 -d: /etc/shadow | head -n1 > /mnt/etc/.root-password
arch-chroot /mnt

Now you're in the chroot

cat >/root/.emacs <<END
(setq make-backup-files nil)
END
ln -sf /usr/share/zoneinfo/America/Detroit /etc/localtime
systemctl enable systemd-timesyncd.service
hwclock --systohc
perl -pi -e 's/#(?=en_US\.UTF-8 UTF-8)//' /etc/locale.gen
locale-gen
cat >/etc/locale.conf <<END
LANG=en_US.UTF-8
END
cat >/etc/hostname <<END
juan
END
cat >/etc/systemd/network/20-wired.network <<END
[Match]
Name=en*

[Network]
DHCP=yes
END
systemctl enable systemd-networkd.service systemd-resolved.service
cat >/etc/dracut.conf.d/myflags.conf << END
uefi="yes"
compress="zstd"
kernel_cmdline="root=LABEL=MAIN rootflags=subvol=@,compress=zstd"
END
for k in /usr/lib/modules/*; do dracut --kver $(basename "$k"); done
bootctl install
cat >/boot/loader/loader.conf <<END
timeout 0
console-mode keep
editor no
END
usermod -p `cat /etc/.root-password` root
useradd -m -G wheel,users -U -s /usr/bin/zsh -p `cat /etc/.root-password` joe 
rm /etc/.root-password
cat >/etc/sudoers.d/00_wheel <<END
%wheel ALL=(ALL:ALL) ALL
END
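# makepkg.conf tweaks: -march=x86-64 -> x86-64-v4, -mtune=generic -> native,
# uncomment RUSTFLAGS and append -C target-cpu=x86-64-v4, uncomment MAKEFLAGS and set -j10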
perl -pi -e 's/(?<=-march=x86-64) /-v4 / ; s/(?<=-mtune=)generic/native/ ; s/^#(RUSTFLAGS="[^"]*)"/$1 -C target-cpu=x86-64-v4"/ ; s/^#(?<prefix>MAKEFLAGS="-j)(\d+)(?<postfix>.*)$/$+{prefix}10$+{postfix}/' /etc/makepkg.conf
systemctl enable sshd.service paccache.timer
mkdir -m 0700 /home/joe/.ssh
cat >/home/joe/.ssh/authorized_keys <<END
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICCWGHWSbfrMaedEPJUZKoHKHKcowy2oKW2PIK8MUJ7P
END
cat >/home/joe/.emacs <<END
(setq make-backup-files nil)
END
touch /home/joe/.zshrc
chown -R joe:joe /home/joe/.ssh /home/joe/.emacs /home/joe/.zshrc
# https://gitlab.archlinux.org/archlinux/arch-install-scripts/-/issues/70
chmod 0644 /etc/pacman.conf
exit

Now you're out of the chroot

umount -R /mnt
reboot

Post-Install

Ideally this should be converted to a script, but I am still building it out, so for now it's broken out into steps, based on https://wiki.archlinux.org/title/General_recommendations

Resume through SSH

Connect to the server as joe (there should be no password prompt), then:

curl -o install-omz.sh -L https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh
cat >install-omz.sh.sha256 <<END
96d90bb5cfd50793f5666db815c5a2b0f209d7e509049d3b22833042640f2676 install-omz.sh
END
sha256sum -c install-omz.sh.sha256 || exit 1
sh install-omz.sh
rm install-omz.sh{,.sha256}
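# oh-my-zsh tweaks: switch ZSH_THEME to strug, uncomment HYPHEN_INSENSITIVE,
# COMPLETION_WAITING_DOTS, and DISABLE_MAGIC_FUNCTIONS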
perl -pi -e 's/(?<=ZSH_THEME=")robbyrussell(?=")/strug/ ; s/# (?=(?:HYPHEN_INSENSITIVE="true")|(?:COMPLETION_WAITING_DOTS="true")|(?:DISABLE_MAGIC_FUNCTIONS="true"))//' ~/.zshrc
cat >/home/joe/.oh-my-zsh/custom/prefs.zsh <<END
export EDITOR=/usr/bin/emacs
export DIFFPROG=/usr/bin/meld
setopt appendhistory
setopt INC_APPEND_HISTORY
END

Reconnect as joe one last time.

AUR Helper

ZFS comes from the AUR. I need to update ZFS regularly, so I need to hold my nose here.

mkdir -p ~/src/paru-bin
pushd ~/src/paru-bin
git clone https://aur.archlinux.org/paru-bin .
makepkg -si
popd
sudo perl -pi -e 's/#(?=SudoLoop)//' /etc/paru.conf

ZFS

ALWAYS:

paru -S zfs-dkms
sudo mkdir -p /etc/zfs/zfs-list.cache

ONLY DURING TESTING:

for i in {1..5}; do fallocate -l 1G ~/$i.img; done
sudo zpool create nas raidz1 ~/{1,2,3,4,5}.img
for f in joe kristina media shared archipelago forgejo foundryvtt foundryvtt2 archipelago2 factorio20241027
do
    sudo zfs create -o compression=zstd nas/$f
    sudo touch /nas/$f/some-file-owned-by-$f
done
sudo chown -R 1000:1000 /nas/joe
sudo chown -R 1002:1002 /nas/kristina
sudo chown -R 1004:1004 /nas/factorio20241027
sudo chown -R 0:984 /nas/media
sudo chown -R 969:969 /nas/archipelago
sudo chown -R 963:963 /nas/forgejo
sudo chown -R 968:984 /nas/foundryvtt
sudo chown -R 966:966 /nas/foundryvtt2
sudo chown -R 967:967 /nas/archipelago2
sudo mkdir -p /nas/media/jellyfin
sudo chown 971:971 /nas/media/jellyfin

ALWAYS:

sudo systemctl enable --now zfs.target zfs-import.target zfs-import-cache.service zfs-mount.service zfs-zed.service
sudo touch /etc/zfs/zfs-list.cache/nas

Generate a new initramfs on kernel upgrade

Not sure what weird thing dracut-ukify is doing. This is all I need for this setup... which is a lot more code to write here, but it's the simplest I've seen so far. Which probably means that I'm doing it wrong, but *shrugs*.

sudo mkdir /.snapshots/backup-efi
sudo mkdir /.snapshots/root-auto
cat >/tmp/airbreather-runs-dracut-like-this.sh <<'END'
#!/usr/bin/bash

# all installed kernel versions
kvers=($(basename -a /usr/lib/modules/*))

# move any UKI whose kernel is no longer installed out of the ESP
for img in /boot/EFI/Linux/linux-*.efi
do
    kver_img=$(basename $img)
    found=0
    for kver in "${kvers[@]}"
    do
        if [[ $kver_img = "linux-$kver-"* ]]
        then
            found=1
            break
        fi
    done

    if [[ $found = 0 ]]
    then
        mv $img /.snapshots/backup-efi/
    fi
done

# regenerate the UKI for every installed kernel
for kver in "${kvers[@]}"
do
    dracut --force --kver $kver
done
END
chmod 0755 /tmp/airbreather-runs-dracut-like-this.sh
cat >/tmp/snapshot-root.sh <<'END'
#!/usr/bin/sh

# based on https://github.com/vaminakov/btrfs-autosnap
btrfs subvolume snapshot -r / "/.snapshots/root-auto/$(date -u --rfc-3339=ns)"
END
chmod 0755 /tmp/snapshot-root.sh
sudo mv /tmp/airbreather-runs-dracut-like-this.sh /tmp/snapshot-root.sh /usr/local/bin/
cat >/tmp/90-airbreather-installs-dracut-like-this.hook <<END
[Trigger]
Type = Path
Operation = Install
Operation = Upgrade
Target = usr/lib/modules/*/pkgbase
Target = usr/lib/dracut/*
Target = usr/lib/systemd/systemd
Target = usr/lib/systemd/boot/efi/*.efi.stub
Target = usr/src/*/dkms.conf

[Action]
Description = Updating linux images, the airbreather way...
When = PostTransaction
Exec = /usr/local/bin/airbreather-runs-dracut-like-this.sh
NeedsTargets
END
chmod 0644 /tmp/90-airbreather-installs-dracut-like-this.hook
cat >/tmp/01-snapshot-root.hook <<END
[Trigger]
Type = Package
Operation = Install
Operation = Upgrade
Operation = Remove
Target = *

[Action]
Description = Making BTRFS snapshot of the root...
Depends = btrfs-progs
When = PreTransaction
Exec = /usr/local/bin/snapshot-root.sh
AbortOnFail
NeedsTargets
END
chmod 0644 /tmp/01-snapshot-root.hook
sudo mkdir -p /etc/pacman.d/hooks
sudo mv /tmp/90-airbreather-installs-dracut-like-this.hook /tmp/01-snapshot-root.hook /etc/pacman.d/hooks/

Additional users and groups

# Three ways I could have done this:
# 1. renumber the clashes that Arch brings to us out-of-the-box, then chown everything in the base
#    system accordingly
# 2. accept different UID / GID values, then chown everything in the zpool accordingly
# 3. before the migration, on the source side: assign different non-clashing values, then chown
#    everything in the zpool accordingly
#
# I feel like #2 is the safest, assuming that I don't make any stupid mistakes outside the scope of
# this Gist. I don't expect any important parts of the Arch ecosystem to do anything as insane as to
# assume that specific groups have specific GID values, but the instant that I pulled an AUR helper
# into all this, I also opted-in to assuming that any given package might be doing some chaotic
# neutral things to make the underlying software work. It also should result in IDs that fall within
# the appropriate ranges as defined by the **very different** /etc/login.defs files, which *can* be
# treated as in-scope for the Arch ecosystem to make assumptions for routines that have no better
# option. So I add a step that will be obnoxious to reverse — but not impossible by any means — if I
# need to abort partway through.
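# Optional sanity check: list which names currently hold UIDs/GIDs in the range being remapped
# below, to compare against the hard-coded old IDs carried over from the previous system.
getent passwd | awk -F: '$3 >= 198 && $3 <= 1004 {print $3, $1}' | sort -n
getent group | awk -F: '$3 >= 198 && $3 <= 1004 {print $3, $1}' | sort -n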

sudo useradd -G users -U -m kristina

echo '#!/usr/bin/sh' >/home/joe/remap_ids.sh
joe_uid=$(id -u joe)
kristina_uid=$(id -u kristina)

echo find /nas -uid 1002 -exec chown --no-dereference $kristina_uid "'{}'" "';'" >> /home/joe/remap_ids.sh
for old_uid in 198 962 963 965 966 967 968 969 971 994 1000 1003 1004 
do
    echo find /nas -uid $old_uid -exec chown --no-dereference $joe_uid "'{}'" "';'" >> /home/joe/remap_ids.sh
done

joe_gid=$(getent group joe | cut -d: -f3)
users_gid=$(getent group users | cut -d: -f3)
kristina_gid=$(getent group kristina | cut -d: -f3)

echo find /nas -gid 984 -exec chgrp --no-dereference $users_gid "'{}'" "';'" >> /home/joe/remap_ids.sh
echo find /nas -gid 1002 -exec chgrp --no-dereference $kristina_gid "'{}'" "';'" >> /home/joe/remap_ids.sh
for old_gid in 198 961 962 963 965 966 967 968 969 971 994 1000 1004
do
    echo find /nas -gid $old_gid -exec chgrp --no-dereference $joe_gid "'{}'" "';'" >> /home/joe/remap_ids.sh
done
echo
echo
echo
echo 'A script has been created at /home/joe/remap_ids.sh that will remap the IDs in the zpool.'
echo 'This is a DESTRUCTIVE operation, so I am taking full precautions not to run it automatically.'
echo 'Examine it before running it (which must be done as root). Good luck.'

Notes for next rebuild

The main page should get me to a base system in a reasonable, if not yet very useful, state. This adds the services back on.

Podman

paru -S --asdeps crun fuse-overlayfs
paru -S podman

cat >/tmp/10-unqualified-search-registries.conf <<<'unqualified-search-registries = ["docker.io"]'
chmod 0644 /tmp/10-unqualified-search-registries.conf
sudo mv /tmp/10-unqualified-search-registries.conf /etc/containers/registries.conf.d/

# pull images at once, so I can AFK while the downloading happens.
podman pull docker.io/certbot/certbot:latest

# (more images here)

mkdir -p $HOME/.config/containers/systemd
mkdir $HOME/letsencrypt
loginctl enable-linger

Certbot

(read -s 'CLOUDFLARE_TOKEN?Cloudflare API token: '; echo "dns_cloudflare_api_token=$CLOUDFLARE_TOKEN" | podman secret create --replace cloudflare_credentials -)
cat >$HOME/.config/containers/systemd/certbot-var-lib-letsencrypt.volume <<'END'
[Unit]
Description=Certbot's /var/lib/letsencrypt volume

[Volume]
VolumeName=certbot-var-lib-letsencrypt
END
cat >$HOME/.config/containers/systemd/certbot-var-log-letsencrypt.volume <<'END'
[Unit]
Description=Certbot's /var/log/letsencrypt volume

[Volume]
VolumeName=certbot-var-log-letsencrypt
END

# initial certs require more command-line params
for domain in 'startcodon.com' 'airbreather.party'; do
    podman run \
        --rm \
        -v "$HOME/letsencrypt:/etc/letsencrypt:Z" \
        -v "certbot-var-log-letsencrypt:/var/log/letsencrypt:Z" \
        -v "certbot-var-lib-letsencrypt:/var/lib/letsencrypt:Z" \
        --secret cloudflare_credentials,mode=0600 \
        docker.io/certbot/dns-cloudflare:latest \
            certonly \
            --email '[email protected]' \
            --agree-tos \
            --non-interactive \
            --dns-cloudflare \
            --dns-cloudflare-credentials /run/secrets/cloudflare_credentials \
            --dns-cloudflare-propagation-seconds 30 \
            -d $domain \
            -d '*.'$domain
done

cat >$HOME/.config/containers/systemd/certbot-renew.container <<'END'
[Unit]
Description=Renew certificates using certbot

[Container]
Image=docker.io/certbot/dns-cloudflare:latest
ContainerName=certbot
Volume=%h/letsencrypt:/etc/letsencrypt:Z
Volume=certbot-var-log-letsencrypt.volume:/var/log/letsencrypt:Z
Volume=certbot-var-lib-letsencrypt.volume:/var/lib/letsencrypt:Z
AutoUpdate=registry
Secret=cloudflare_credentials,mode=0600
Exec=renew
END
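Nothing above schedules the renew container yet; a minimal sketch of one way to do it, assuming a plain systemd user timer pointed at the Quadlet-generated certbot-renew.service (the timer itself is not something the .container file sets up):

mkdir -p $HOME/.config/systemd/user
cat >$HOME/.config/systemd/user/certbot-renew.timer <<'END'
[Unit]
Description=Run certbot renew daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
END
systemctl --user daemon-reload
systemctl --user enable --now certbot-renew.timer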

Notes from new stuff

This file is intended to contain notes for things I put in here after the initial install.

krb5

Totally failed to get this up and running, even just for NFS.

I never got it working in the end; I tried to clean it up, but I suspect there are leftovers scattered around from my various attempts.

Forgejo

The main repo has it now.

location / {
    client_max_body_size 512M;
    proxy_pass http://localhost:3000;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}

Docker

UPDATED: learn how to use Podman + Quadlets. That is SO much more sensible than any of this garbage.

Initially for the Forgejo runner, but we all know how all the kool kidz like to containerize everything because they can't be bothered to deal with dependencies.

Creating group 'docker' with GID 962

Adding joe and forgejo to the docker group. Probably just need to keep forgejo in there for this.

Installing docker-compose and docker-buildx. Don't forget to docker buildx install.
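Roughly, the commands behind those notes (a sketch; adjust the group memberships and who runs the buildx install to taste):

sudo usermod -aG docker joe
sudo usermod -aG docker forgejo
paru -S docker-compose docker-buildx
sudo docker buildx install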

read -s 'registrationToken?Forgejo runner registration token: '
echo
cat >/tmp/docker-compose.yml <<'EOF'
# Copyright 2023 The Forgejo Authors.
# SPDX-License-Identifier: MIT

version: "3"

services:
  runner-register:
    image: code.forgejo.org/forgejo/runner:3.3.0
    user: 0:0
    network_mode: host
    volumes:
      - /nas/forgejo/runner-data:/data
      - /var/run/docker.sock:/var/run/docker.sock
    command:
      - /bin/sh
      - -c
      - |
        forgejo-runner register --no-interactive --token REGISTRATION_TOKEN_HERE --name runner --instance https://git.startcodon.com
        forgejo-runner generate-config > config.yml
        sed -i -e "s|network: .*|network: host|" config.yml
        sed -i -e "s|labels: \[\]|labels: \[\"docker:docker://alpine:3.19\"\]|" config.yml ;

  runner-daemon:
    image: code.forgejo.org/forgejo/runner:3.3.0
    depends_on:
      runner-register:
        condition: service_completed_successfully
    user: "FORGEJO_UID_HERE:DOCKER_GID_HERE"
    volumes:
      - /nas/forgejo/runner-data:/data
      - /var/run/docker.sock:/var/run/docker.sock
    command: forgejo-runner --config config.yml daemon
EOF
sed -i s/REGISTRATION_TOKEN_HERE/$registrationToken/g /tmp/docker-compose.yml
sed -i s/FORGEJO_UID_HERE/$(id -u forgejo)/g /tmp/docker-compose.yml
sed -i s/DOCKER_GID_HERE/$(getent group docker | cut -d: -f3)/g /tmp/docker-compose.yml
sudo mv /tmp/docker-compose.yml /etc/forgejo/
sudo chown forgejo:forgejo /etc/forgejo/docker-compose.yml
sudo mkdir -p /nas/forgejo/runner-data
sudo chown -R forgejo:forgejo /nas/forgejo/runner-data
cat >/tmp/forgejo-runner.service <<EOF
[Unit]
Description=Forgejo runner
Requires=docker.service
After=docker.service

[Service]
Type=simple
Restart=always
User=forgejo
Group=docker
WorkingDirectory=/etc/forgejo
ExecStartPre=docker compose -f docker-compose.yml stop
ExecStart=docker compose -f docker-compose.yml up
ExecStop=docker compose -f docker-compose.yml stop

[Install]
WantedBy=multi-user.target
EOF
sudo chown root:root /tmp/forgejo-runner.service
sudo mv /tmp/forgejo-runner.service /etc/systemd/system/
cat >/tmp/Dockerfile.forgejo-runner <<'EOF'
FROM node:lts-bookworm

RUN apt-get update \
  && apt-get install -y \
    zstd zip \
  && rm -rf /var/lib/apt/lists/*
EOF
sudo chown root:root /tmp/Dockerfile.forgejo-runner
sudo mv /tmp/Dockerfile.forgejo-runner /etc/forgejo/
sudo docker build /etc/forgejo --file /etc/forgejo/Dockerfile.forgejo-runner --tag airbreather/forgejo-runner
cat >/tmp/Dockerfile.dotnet-sdk-8.0-forgejo-runner <<'EOF'
FROM airbreather/forgejo-runner

RUN wget https://packages.microsoft.com/config/debian/12/packages-microsoft-prod.deb -O packages-microsoft-prod.deb \
  && dpkg -i packages-microsoft-prod.deb \
  && rm packages-microsoft-prod.deb \
  && apt-get update \
  && apt-get install -y dotnet-sdk-8.0 \
  && rm -rf /var/lib/apt/lists/* \
  # Trigger first run experience by running arbitrary cmd
  && dotnet help
EOF
sudo chown root:root /tmp/Dockerfile.dotnet-sdk-8.0-forgejo-runner
sudo mv /tmp/Dockerfile.dotnet-sdk-8.0-forgejo-runner /etc/forgejo/
sudo docker build /etc/forgejo --file /etc/forgejo/Dockerfile.dotnet-sdk-8.0-forgejo-runner --tag airbreather/dotnet-sdk-8.0-forgejo-runner

Stalwart

Via Docker, because I need to get better at this thing.

DO THIS ONCE

sudo zfs create nas/mail
sudo useradd -r -U -m -d /nas/mail/stalwart/home stalwart
sudo mkdir -p /nas/mail/stalwart/data
sudo chown -R stalwart:stalwart /nas/mail/stalwart

This must be done for every version update... here's what I did for 0.6.0:

sudo docker pull stalwartlabs/mail-server:v0.6.0
sudo docker run -d -ti \
    -p 8969:443 \
    -p 25:25 \
    -p 587:587 \
    -p 465:465 \
    -p 143:143 \
    -p 993:993 \
    -p 4190:4190 \
    -v /nas/mail/stalwart/data:/opt/stalwart-mail \
    -v /etc/letsencrypt/live/startcodon.com/fullchain.pem:/certs/fullchain.pem \
    -v /etc/letsencrypt/live/startcodon.com/privkey.pem:/certs/privkey.pem \
    --name stalwart-mail \
    stalwartlabs/mail-server:v0.6.0
sudo docker exec -it stalwart-mail /bin/sh /usr/local/bin/configure.sh

Defaults for everything except domain stuff, configure domain and DNS as appropriate (startcodon.com main, mail.startcodon.com for the server hostname). Then...

sudo rm /nas/mail/stalwart/data/etc/certs/mail.startcodon.com/{fullchain,privkey}.pem
sudo ln -s /certs/fullchain.pem /nas/mail/stalwart/data/etc/certs/mail.startcodon.com/fullchain.pem
sudo ln -s /certs/privkey.pem /nas/mail/stalwart/data/etc/certs/mail.startcodon.com/privkey.pem
sudo docker start stalwart-mail

For whatever reason, I had to stop the container and restart it before the certs would get used.

Mail was getting flagged as spam (of course), but I did notice that the DMARC copypasta included p=none... setting that to p=reject at least allowed Proton to accept the e-mails without fear, so that's enough for me.
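For reference, the setting lives in the _dmarc TXT record; a minimal reject policy looks something like this (any extra tags are up to you):

_dmarc.startcodon.com.  TXT  "v=DMARC1; p=reject"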

Updating:

  • docker pull the latest version

  • READ THE UPGRADE MANUAL, currently at https://github.com/stalwartlabs/mail-server/blob/main/UPGRADING.md (but you know how that do)

    • 0.6 to 0.7 is special: it looks like ONLY the web admin setup is documented (no configure script anymore, and the old configure script tries to download from a location that currently hits 404). Wow.
  • stop the old container (docker container ls to get the ID)

  • remove the old container by that same ID

  • probably the same docker run command as above (with the version number replaced), but double-check the latest docs.
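That routine, as a rough shell sketch (vX.Y.Z is a placeholder for whatever the new tag is; double-check the run flags against the latest docs before pasting):

sudo docker pull stalwartlabs/mail-server:vX.Y.Z
sudo docker container ls    # note the old container's ID/name
sudo docker stop stalwart-mail
sudo docker rm stalwart-mail
# then re-run the `docker run -d -ti ...` command from above with the new version tag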

Matrix

matrix-synapse:

Creating group 'synapse' with GID 198.
Creating user 'synapse' (Matrix Synapse user) with UID 198 and GID 198

then run:

sudo zfs create nas/synapse
cd /nas/synapse
sudo chown -R synapse:synapse .
sudo -u synapse python -m synapse.app.homeserver --server-name matrix.startcodon.com --config-path /etc/synapse/homeserver.yaml --generate-config --report-stats=yes
cd
sudo chmod 0700 /nas/synapse

# just in case some guide(s) expect to see it at /var/lib/synapse...
sudo rm -rf /var/lib/synapse
sudo ln -s /nas/synapse /var/lib/synapse

add the reverse proxy to nginx config, forward port 8448, then you can start / enable synapse.service
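For the nginx side, something along the lines of the Forgejo block above should do it, assuming Synapse is listening on its default port 8008 (a sketch, not the exact block from the live config):

location ~ ^(/_matrix|/_synapse/client) {
    proxy_pass http://localhost:8008;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    client_max_body_size 50M;
}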

2024-03-26: disabled synapse; it idles at too much CPU, and I'm not actually using it.

Update 2024-03-16

Finally got around to updating the pacman package. makepkg.conf comes with several changes, which I've brought in mostly as-is, except:

  1. 90bf367e brought the system makepkg.conf in line with the version from devtools (permalink to the current latest version as I type this). This is mostly sane, with one major exception: --ultra -20 makes sense for what devtools is there to achieve, but on a user's system it's borderline ludicrous to spend so much CPU power to shrink the package files by so little relative to, say, -10.

    • Even -10 seems a little bit on the wild side for a GENERAL-PURPOSE BASELINE IN A MAINSTREAM LINUX DISTRIBUTION, but I've looked at enough benchmarks and tested out enough on my own hardware to come to the conclusion that -10 is a pretty decent tradeoff between speed and size, such that I feel comfortable going to at least that level wherever I care even a little.
    • And it's not completely unwarranted for even the baseline config to use -10, considering the context of what's going on around that flag: this is makepkg.conf, after all, so it's quite likely that this will never significantly impact users on hardware that would prefer a more conservative compression level. But --ultra -20 is ludicrous.
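For reference, the flag lives in COMPRESSZST in /etc/makepkg.conf; the -10 variant argued for above would look something like this (the rest of the line follows the stock file):

COMPRESSZST=(zstd -c -T0 -10 -)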

Plex

Removed, in favor of Jellyfin. Requested that my account be deleted, removed all devices, everything.

  • deleted the plex user and corresponding group

  • deleted /nas/media/junk/PlexCacheThing

  • removed everything with the old plex user's UID or its old group's GID using the following commands to see what those were (it was pretty much just /var/lib/plex and some temporary files that stopping the service didn't quite clean up perfectly, I think):

    sudo find / -path "/.snapshots/*" -prune -o -uid 970 -print >plex-uid
    sudo find / -path "/.snapshots/*" -prune -o -gid 970 -print >plex-gid

Pruned for next rebuild

I've already updated the "arch-server-install.md" file in this gist to slim stuff down in preparation for the upcoming rebuild.

Intent is to use containers for practically everything, so I've trimmed things out:

NFS

sudo systemctl enable --now nfsv4-server.service zfs-share.service
sudo zfs set sharenfs=on nas

Services pre-configured on zpool

cat >/tmp/foundryvtt.service <<END
[Unit]
Description=Foundry VTT

[Service]
Type=simple
ExecStart=node /nas/foundryvtt/app/resources/app/main.js --dataPath=/nas/foundryvtt/data
Restart=on-failure
User=foundryvtt

[Install]
WantedBy=multi-user.target
END
cat >/tmp/foundryvtt2.service <<END
[Unit]
Description=Foundry VTT (Second Instance)

[Service]
Type=simple
ExecStart=node /nas/foundryvtt2/app/resources/app/main.js --dataPath=/nas/foundryvtt2/data
Restart=on-failure
User=foundryvtt2

[Install]
WantedBy=multi-user.target
END

sudo mv /tmp/{foundryvtt{,2},archipelago{,2}}.service /etc/systemd/system/
read -s '_ignore?About to enable the custom services that will only run in "go mode". This is your chance to Ctrl-C out.'
echo
sudo systemctl enable --now {foundryvtt{,2},archipelago{,2}}.service

Notes from actually doing it

  • can't completely wipe /dev/sdc because /dev/sdc3 is part of the zpool

  • /dev/sdc1 is boot, /dev/sdc2 is root

  • booted into BIOS instead of UEFI. noticed quickly

  • saw the boot looping issue again. updated BIOS to 3.50, boot looping went away.

  • needed to change the network interface name from enp1s0 to enp2s0

  • YOLOing certbot into not-test-mode. worked fine. probably owing to all the testing that I did when it was in test mode.

  • need to handle smb users after all is done. done.

  • remap_ids.sh doesn't pass -h / --no-dereference, so it never changes permissions of links. did it manually this time.

  • need to add ACL entries so the archipelago user can create encrypted web socket connections. done.

  • need to add a certbot 'deploy' renewal hook to add the ACL entry for the archipelago user. done:

    • /etc/letsencrypt/renewal-hooks/deploy/01-archipelago-acl.sh

      #!/bin/sh
      setfacl -m "u:archipelago:r" "$(realpath $RENEWED_LINEAGE/privkey.pem)"
  • Arch made it easier for me to run archipelago in a venv than the way I was running it before. This is a very good thing.

  • NFS functionally means Kerberos... but I can use it maybe unfunctionally for a while.

  • added a pacman hook to take a snapshot of the "/" subvolume at PreTransaction time:

    • all based on https://github.com/vaminakov/btrfs-autosnap

    • /etc/pacman.d/hooks/01-snapshot-root.hook

      [Trigger]
      Type = Package
      Operation = Install
      Operation = Upgrade
      Operation = Remove
      Target = *
      
      [Action]
      Description = Making BTRFS snapshot of the root...
      Depends = btrfs-progs
      When = PreTransaction
      Exec = /usr/local/bin/snapshot-root.sh
      AbortOnFail
      NeedsTargets
    • /usr/local/bin/snapshot-root.sh

      #!/bin/sh
      
      # based on https://github.com/vaminakov/btrfs-autosnap
      btrfs subvolume snapshot -r / "/.snapshots/root-auto/$(date -u --rfc-3339=ns)"
  • removal hook for dracut is wrong because of when it runs... just deleting it for now. also edited the inline for /usr/local/bin/airbreather-runs-dracut-like-this.sh above, now that I've seen how it handles an actual update
