This is a collection of personal notes on self-hosted home lab administration. This reference assumes the underlying OS is Ubuntu Server 22.04.
It is recommended to use the lsblk command to get information about block devices rather than blkid. lsblk provides more information, offers better control over output formatting, and does not require root permission to report accurate information.
lsblk -o name,rm,ro,size,uuid,fstype,mountpoint
Assuming the target device is /dev/sdb:
sudo fdisk /dev/sdb
After fdisk is started, press g to create a new GPT partition table (fdisk now supports creating GPT partition tables), and then n to create a new partition. Press w to confirm the changes and write them to the device.
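For a non-interactive alternative, the same layout can be scripted with parted; the following is a sketch that creates a GPT label and a single partition spanning the whole device (adjust the device name and boundaries as needed):
sudo parted --script /dev/sdb mklabel gpt mkpart primary ext4 0% 100%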
First, use lsblk to determine the file name of the partition. Assuming the target partition is /dev/sdb1, the following command will perform a full format, checking the device for bad blocks:
sudo mkfs.ext4 -c /dev/sdb1
By default, 5% of the disk space will be reserved for newly created EXT4 partitions. Re-adjust the percentage to 1%:
sudo tune2fs -m 1 /dev/sdb1
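The change can be verified by listing the superblock parameters; the reserved block count should now be about 1% of the total block count:
sudo tune2fs -l /dev/sdb1 | grep -i 'block count'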
Assuming the UUID of the new partition is c1c23066-f5db-4f19-a2ae-9d4be1b485c5 and the target mount point is /media, add the following line to the end of /etc/fstab and reboot:
UUID=c1c23066-f5db-4f19-a2ae-9d4be1b485c5 /media ext4 defaults,errors=remount-ro 0 2
It is crucial to use the UUID instead of the device file name, as the latter may change when new hard drives are added.
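Alternatively, the entry can be tested without a reboot: mount -a mounts everything listed in /etc/fstab, and findmnt confirms the result:
sudo mount -a
findmnt /media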
List all block devices using lsblk to find the device name of the failed drive and partition. Unmount the partition first; it is best that the device or partition to be rescued is not mounted at all, not even read-only.
Display the drive's identification information (model, serial number, firmware) with:
sudo hdparm -i /dev/sda
Install ddrescue first if it does not exist in the system yet:
sudo apt-get install gddrescue
To rescue an ext4 partition in /dev/sda1 to /dev/sdb1, it is necessary to create the sdb1 partition with fdisk first. sdb1 should be of appropriate type and size.
sudo ddrescue -f -n /dev/sda1 /dev/sdb1 mapfile
sudo ddrescue -d -f -r3 /dev/sda1 /dev/sdb1 mapfile
The -f option forces overwriting of sdb1. This is needed because sdb1 is not a regular file, but a partition. The -n option skips the scraping phase, avoiding spending a lot of time trying to rescue the most difficult parts of the data on the first attempt.
For the second attempt, the -d option tells ddrescue to use direct disk access and ignore the kernel cache. The -r3 option tells ddrescue to retry bad sectors 3 times before giving up.
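Between attempts, the gddrescue package also ships a ddrescuelog utility that can summarize how much of the device has been rescued so far, based on the mapfile from the commands above:
ddrescuelog -t mapfile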
Check for file system errors in sdb1 using:
sudo fsck -p /dev/sdb1
Mount the partition read-only and read the rescued files from /mnt:
sudo mount -t ext4 -o ro /dev/sdb1 /mnt
In case the rescue drive is larger than the failed drive and the rescued file system does not fill the partition, the file system on the unmounted partition can be grown to fill it using:
sudo resize2fs /dev/sdb1
Install libpam-google-authenticator:
sudo apt-get install libpam-google-authenticator
Once Google's PAM module is installed, run google-authenticator to generate a TOTP key for the user you want to add a second factor to. This key is generated on a per-user basis, not system-wide.
Make sure the following line exists in /etc/pam.d/sshd:
auth required pam_google_authenticator.so nullok
Then create a new file /etc/ssh/sshd_config.d/00-auth.conf with the following contents:
PasswordAuthentication no
KbdInteractiveAuthentication yes
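To require both a public key and a TOTP code for each login (rather than accepting either alone), an AuthenticationMethods directive can be added to the same file; this sketch assumes key-based logins are already configured:
PubkeyAuthentication yes
AuthenticationMethods publickey,keyboard-interactive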
Restart SSH daemon afterwards.
Update the apt package index and install packages to allow apt to use a repository over HTTPS:
sudo apt-get update
sudo apt-get install \
ca-certificates \
curl \
gnupg \
lsb-release
Add Docker’s official GPG key:
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
Use the following command to set up the repository:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
Update the apt package index and install Docker Engine, containerd, and Docker Compose:
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
The Docker daemon binds to a Unix socket, not a TCP port. By default, the root user owns the Unix socket, and other users can only access it using sudo. The Docker daemon always runs as the root user.
To avoid having to preface the docker command with sudo, create a Unix group called docker and add users to it. When the Docker daemon starts, it creates a Unix socket accessible by members of the docker group.
sudo groupadd docker
sudo usermod -aG docker $USER
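The group change takes effect at the next login. To verify without logging out, switch the primary group in the current shell and run a test container:
newgrp docker
docker run --rm hello-world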
(Optionally) create a system user for volume access:
sudo useradd -r -s /bin/false dockeruser
First, create the volume that Portainer Server will use to store its database:
docker volume create portainer_data
Pull the Portainer CE image from the official repository. Then build a container and run it:
docker run -d \
--name portainer \
--restart=always \
-p 8000:8000 \
-p 9443:9443 \
-v /var/run/docker.sock:/var/run/docker.sock \
-v portainer_data:/data \
portainer/portainer-ce:latest
Run the following command to deploy the Portainer Agent:
docker run -d \
--name portainer_agent \
--restart=always \
-p 9001:9001 \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /var/lib/docker/volumes:/var/lib/docker/volumes \
portainer/agent:latest
Stop the old upstart service:
sudo stop deluged
Create a new file named deluged.service under /etc/systemd/system with the following contents:
[Unit]
Description=Deluge Daemon
Documentation=man:deluged
After=network-online.target
[Service]
Type=simple
User=debian-deluged
Group=debian-deluged
UMask=002
ExecStart=/usr/bin/deluged -d -c /var/lib/deluged/config -l /var/log/deluged/daemon.log -L warning
Restart=on-failure
TimeoutStopSec=300
[Install]
WantedBy=multi-user.target
Enable the Deluge daemon on boot and start the service through systemd by typing:
sudo systemctl daemon-reload
sudo systemctl enable deluged.service && sudo systemctl start deluged.service
Create a named volume for Plex configuration and database files:
docker volume create --name=plex_config
Copy the existing local Plex configuration and database files to the Docker volume:
docker run -d --rm -v /var/lib/plexmediaserver:/source -v plex_config:/config ubuntu cp -a /source/. /config/
Pull the Plex image from the official repository (for the first time) and start the container:
docker run \
-d \
--name plex \
--restart=unless-stopped \
-p 32400:32400/tcp \
-p 3005:3005/tcp \
-p 8324:8324/tcp \
-p 32469:32469/tcp \
-p 1900:1900/udp \
-p 32410:32410/udp -p 32412:32412/udp \
-p 32413:32413/udp -p 32414:32414/udp \
-e PLEX_UID=$(id -u dockeruser) \
-e PLEX_GID=$(id -g dockeruser) \
-e TZ=America/Chicago \
-e PLEX_CLAIM="<claimToken>" \
-e ADVERTISE_IP="http://192.168.1.6:32400/" \
-h $(hostname) \
-v plex_config:/config \
-v /transcode:/transcode \
-v /media:/media:ro \
plexinc/pms-docker
Obtain the claim token through: https://www.plex.tv/claim
Create a named volume for Tautulli configuration and database files:
docker volume create --name=plexpy_config
Pull the Tautulli image from the official repository (for the first time). Then build a container and run it:
docker run -d \
--name=plexpy \
--restart=unless-stopped \
-v plexpy_config:/config \
-e PUID=$(id -u dockeruser) \
-e PGID=$(id -g dockeruser) \
-e TZ=America/Chicago \
-p 8181:8181 \
ghcr.io/tautulli/tautulli
To update the container:
docker stop plexpy
docker rm plexpy
docker pull ghcr.io/tautulli/tautulli
Then rerun the same docker run command.
Tautulli is unable to detect named Docker volume mounts, so it will warn that the Docker volume mount is not configured properly. The warning can be ignored.
Create named volumes for Jellyfin configuration and cache files:
docker volume create --name=jellyfin_config
docker volume create --name=jellyfin_cache
Pull the official Jellyfin image. Build a container from the image and run it:
docker run -d \
--name=jellyfin \
--restart=unless-stopped \
-v jellyfin_config:/config \
-v jellyfin_cache:/cache \
-v /media:/media:ro \
-p 8096:8096 -p 8920:8920 \
-p 1900:1900/udp -p 7359:7359/udp \
--user $(id -u dockeruser):$(id -g dockeruser) \
jellyfin/jellyfin
Jellyfin binds to the following static ports:
Port/Protocol | Function | Configurable |
---|---|---|
8096/tcp | Used by default for HTTP traffic. | Yes |
8920/tcp | Used by default for HTTPS traffic. | Yes |
1900/udp | Used for service auto-discovery. DLNA also uses this port and requires it to be in the local subnet. | No |
7359/udp | Used for client auto-discovery. A broadcast message to this port with "Who is JellyfinServer?" will get a JSON response that includes the server address, ID, and name. | No |
To use hardware acceleration in Docker, the devices must be passed to the container. To see what video devices are available, run sudo lshw -c video. VA-API, a Video Acceleration API that supports Intel iGPUs, requires the render or video group to be added to the Docker container permissions, as shown below:
docker run -d \
--name=jellyfin \
--restart=unless-stopped \
-v jellyfin_config:/config \
-v jellyfin_cache:/cache \
-v /media:/media:ro \
--device /dev/dri/renderD128:/dev/dri/renderD128 \
--device /dev/dri/card0:/dev/dri/card0 \
-p 8096:8096 -p 8920:8920 \
-p 1900:1900/udp -p 7359:7359/udp \
--user $(id -u dockeruser):$(id -g dockeruser) \
--group-add="$(getent group video | cut -d: -f3)" \
jellyfin/jellyfin
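To confirm that the render devices are visible inside the container, list them from within (assuming the container name jellyfin used above):
docker exec jellyfin ls -l /dev/dri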
Create named volumes for AdGuard Home configuration and data files:
docker volume create --name=adguard_data
docker volume create --name=adguard_config
Pull the AdGuard Home image from the official repository (for the first time) and start the container:
docker run -d \
--name=adguardhome \
--restart=unless-stopped \
-v adguard_data:/opt/adguardhome/work \
-v adguard_config:/opt/adguardhome/conf \
-p 192.168.1.6:53:53/tcp -p 192.168.1.6:53:53/udp \
-p 67:67/udp -p 68:68/udp \
-p 8080:80/tcp -p 443:443/tcp -p 443:443/udp -p 3000:3000/tcp \
-p 853:853/tcp \
-p 784:784/udp -p 853:853/udp -p 8853:8853/udp \
-p 5443:5443/tcp -p 5443:5443/udp \
adguard/adguardhome
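After the initial setup wizard (served on port 3000) is completed, name resolution through the new server can be verified from another machine on the LAN, using dig from the dnsutils package; this assumes the host address 192.168.1.6 used above:
dig @192.168.1.6 example.com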
Generate a long random password:
openssl rand -base64 16
Deploy shadowsocks-libev through Docker:
docker run -d \
--name=shadowsocks \
--restart=unless-stopped \
-p 8388:8388 -p 8388:8388/udp \
-e PASSWORD="<generatedPassword>" \
-e METHOD="chacha20-ietf-poly1305" \
-e TZ=America/Chicago \
shadowsocks/shadowsocks-libev
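On the client side, a minimal config.json for ss-local might look like the following sketch; the server address and local port are placeholders to adjust:
{
    "server": "<hostIPAddress>",
    "server_port": 8388,
    "local_address": "127.0.0.1",
    "local_port": 1080,
    "password": "<generatedPassword>",
    "method": "chacha20-ietf-poly1305"
}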
Create a Docker volume vaultwarden_data:
docker volume create --name=vaultwarden_data
Deploy Vaultwarden through Docker:
docker run -d \
--name=vaultwarden \
--restart=unless-stopped \
-v vaultwarden_data:/data \
-p 8081:80 \
vaultwarden/server:latest
Create two new directories /var/containers/nginx-proxy-manager/data and /var/containers/nginx-proxy-manager/letsencrypt. Then create the following docker-compose.yml file:
version: '2.4'

networks:
  default:
    name: milkyway
    external: true

services:
  nginx-proxy-manager:
    image: jc21/nginx-proxy-manager:latest
    container_name: nginx-proxy-manager
    restart: unless-stopped
    ports:
      # Public HTTP port
      - "80:80/tcp"
      # Public HTTPS port
      - "443:443/tcp"
      # Admin UI
      - "81:81/tcp"
    volumes:
      - /var/containers/nginx-proxy-manager/data:/data
      - /var/containers/nginx-proxy-manager/letsencrypt:/etc/letsencrypt
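The compose file marks the milkyway network as external, so the network must exist before the stack is brought up:
docker network create milkyway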
Under the same directory, execute docker compose up -d. The admin UI for Nginx Proxy Manager can be accessed through port 81.
Create a new file named smb.conf under /etc/samba with the following lines:
[global]
# Browsing/identification
netbios name = LEVIATHAN
workgroup = CLUSTER
server string = Leviathan Relay
name resolve order = bcast host wins
# Networking
interfaces = lo enp6s0
bind interfaces only = yes
hosts allow = 192.168.1. 10.8.0. localhost
# Debug logging information
log level = 2
logging = syslog@1 file
log file = /var/log/samba/log.%m
max log size = 1000
debug timestamp = yes
panic action = /usr/share/samba/panic-action %d
# Authentication
security = user
encrypt passwords = yes
invalid users = root daemon bin sys sync mail news uucp
# Disk sharing
mangled names = no
nt acl support = no
To share /media through Samba, add the following lines to /etc/samba/smb.conf:
[media]
path = /media
hide files = /cdrom/lost+found/
comment = NAS Hard Drive
public = no
browsable = yes
writable = yes
create mask = 0664
force create mode = 0664
directory mask = 2775
force directory mode = 2775
Here, newly created files will have -rw-rw-r-- permission while new directories will have drwxrwsr-x permission. Granting write permission at the group level is useful because it allows a normal user to access the hard drive through SSH as long as the user is assigned to the same group. Setting the setgid bit in the group permission ensures that new files (even those created by a different user) under the directory automatically inherit the same group name.
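Before restarting the service, the configuration can be checked for syntax errors with testparm, which parses smb.conf and dumps the effective settings:
testparm -s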
Restart the Samba service:
sudo systemctl restart smbd.service
Samba authenticates its users through its own local database. To add an existing user to Samba or create a new entry:
sudo smbpasswd -a $USER
Install WireGuard on the host:
sudo apt-get update
sudo apt-get install wireguard
Generate a private key for the host and save it in a secure location readable only by root:
wg genkey | sudo tee /etc/wireguard/private.key
sudo chmod 600 /etc/wireguard/private.key
Create a public key based on the private key:
sudo cat /etc/wireguard/private.key | wg pubkey | sudo tee /etc/wireguard/public.key
Repeat the same process on a client and obtain a separate pair of private and public keys.
Create a configuration file /etc/wireguard/wg0.conf on the host:
[Interface]
PrivateKey = <Private Key of the Host>
Address = 10.8.0.1/32
ListenPort = 51820
[Peer]
PublicKey = <Public Key of the Client>
AllowedIPs = 10.8.0.2/32
Set the file owner to root and the permission to 600.
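For instance:
sudo chown root:root /etc/wireguard/wg0.conf
sudo chmod 600 /etc/wireguard/wg0.conf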
Start up WireGuard on the host:
sudo systemctl enable wg-quick@wg0.service
sudo systemctl start wg-quick@wg0.service
Open UDP port 51820 on the host and allow packets through the wg0 interface (using OpenSSH as an example):
sudo ufw allow 51820/udp comment 'WireGuard'
sudo ufw allow in on wg0 to any app OpenSSH
Next, create a configuration file for the client:
[Interface]
PrivateKey = <Private Key of the Client>
Address = 10.8.0.2/32
ListenPort = 51821
[Peer]
PublicKey = <Public Key of the Host>
Endpoint = <Host IP Address>:51820
AllowedIPs = 10.8.0.0/24, 192.168.1.0/24
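Assuming the client configuration is saved as /etc/wireguard/wg0.conf on the client, the tunnel can be brought up with wg-quick and the handshake verified:
sudo wg-quick up wg0
sudo wg show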
Update the configuration file on the host to masquerade forwarded packets through enp6s0:
[Interface]
PrivateKey = <Private Key of the Host>
Address = 10.8.0.1/32
ListenPort = 51820
PreUp = sysctl -w net.ipv4.ip_forward=1
PostUp = iptables -t nat -A POSTROUTING -o enp6s0 -j MASQUERADE
PostDown = iptables -t nat -D POSTROUTING -o enp6s0 -j MASQUERADE
[Peer]
PublicKey = <Public Key of the Client>
AllowedIPs = 10.8.0.2/32
Allow packets forwarded from the wg0 interface to enp6s0 through the firewall on the host:
sudo ufw route allow in on wg0 out on enp6s0
Update the configuration on the client to route all connections through WireGuard:
[Interface]
PrivateKey = <Private Key of the Client>
Address = 10.8.0.2/32
ListenPort = 51821
[Peer]
PublicKey = <Public Key of the Host>
Endpoint = <Host IP Address>:51820
AllowedIPs = 0.0.0.0/0, ::/0
List all listening ports and the programs bound to them:
sudo netstat -putln
Check status of UFW:
sudo ufw status
Defining default rules for allowing and denying connections makes setting up any firewall easier. The UFW defaults are to deny all incoming connections and allow all outgoing connections:
sudo ufw default deny incoming
sudo ufw default allow outgoing
The following command allows a UDP connection from any address (public or local) to port 51820 for WireGuard:
sudo ufw allow 51820/udp comment 'WireGuard'
It is possible to specify port ranges, although a protocol (tcp or udp) must then be given. To allow TCP ports 6881 through 6889 for Deluge traffic:
sudo ufw allow 6881:6889/tcp comment 'Deluge'
It is also possible to restrict connections to a private subnet (e.g., 192.168.1.0/24). To allow local access only to OpenSSH, Samba, and the Deluge daemon for thin client communication:
sudo ufw allow from 192.168.1.0/24 to any app OpenSSH
sudo ufw allow from 192.168.1.0/24 to any app Samba
sudo ufw allow from 192.168.1.0/24 proto tcp to any port 58846 comment 'Deluge Daemon'
To allow incoming traffic from the WireGuard interface wg0 to access OpenSSH:
To allow routing from the WireGuard interface wg0 to any address (that is not the host) through the ethernet interface enp1s0:
sudo ufw route allow in on wg0 out on enp1s0
To delete rules:
sudo ufw delete allow 80/tcp
When the rules are complicated, a simpler, two-step alternative is to type:
sudo ufw status numbered
sudo ufw delete 10
where rule 10 is deleted.
Append the following at the end of /etc/ufw/after.rules:
# Begin rules for Docker
*filter
:ufw-user-forward - [0:0]
:ufw-docker-forward - [0:0]
:DOCKER-USER - [0:0]
-A DOCKER-USER ! -i docker0 -o docker0 -j ufw-docker-forward
-A DOCKER-USER ! -i br-+ -o br-+ -j ufw-docker-forward
-A DOCKER-USER -j RETURN
-A ufw-docker-forward -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A ufw-docker-forward -m conntrack --ctstate INVALID -j DROP
-A ufw-docker-forward -j ufw-user-forward
-A ufw-docker-forward -m limit --limit 3/min --limit-burst 10 -j LOG --log-prefix "[UFW DOCKER BLOCK] "
-A ufw-docker-forward -j DROP
COMMIT
# End rules for Docker
Restart UFW afterwards.
To allow local access from 192.168.1.0/24 to port 53 of a container with an IP address of 172.17.0.2:
sudo ufw route allow from 192.168.1.0/24 to 172.17.0.2 port 53
To allow public access through TCP to port 32400 of a container with an IP address of 172.17.0.3:
The following script offers automated snapshot-style backup using rsync. It creates incremental backups of files and directories to a local backup drive or directory.
git clone https://github.com/zhen-huan-hu/rsync-snapshot.git
Save the backup script under /usr/local/sbin and make it executable.
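For example, assuming the script inside the cloned repository is named rsync-snapshot.sh (matching the cron entry below):
sudo install -m 755 rsync-snapshot/rsync-snapshot.sh /usr/local/sbin/rsync-snapshot.sh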
(Optionally) the script can be configured to read two files: /etc/backups/backup.drives, which is set by the -d option and contains the UUIDs of the backup drives, and /etc/backups/backup.exclusions, which is set by the -e option and contains exclusion patterns (one per line) for rsync. An example backup.exclusions would be:
/dev/*
/proc/*
/sys/*
/tmp/*
/run/*
/var/tmp/*
/var/cache/*
/var/run/*
/media/*
/mnt/*
/lost+found
Use sudo crontab -e to add a cron job for the backup:
5 0 * * * /usr/local/sbin/rsync-snapshot.sh -e /etc/backups/backup.exclusions -d /etc/backups/backup.drives / /mnt
This performs a daily backup of the entire system tree at 0:05 am.
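The same command can also be run manually to take an immediate snapshot before relying on the cron schedule:
sudo /usr/local/sbin/rsync-snapshot.sh -e /etc/backups/backup.exclusions -d /etc/backups/backup.drives / /mnt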