This is how I set up my homelab, step by step, in roughly the order I actually did it. Each step builds on the previous one. Read until you hit where you're at and stop.
The first thing to get right is flake.nix. This is your entire system config: users, networking, firewall, packages, services. Everything in one file.
Here's a minimal starting point. Set a static IP so your other devices can always find the server. Enable Docker and SSH. Disable sleep. That's it.
```nix
{
  description = "NixOS Homelab";

  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
  };

  outputs = { self, nixpkgs, ... }: {
    nixosConfigurations."thinkcentre" = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [
        ./hardware-configuration.nix
        ({ pkgs, lib, ... }: {
          # ── Users ──────────────────────────────────────────
          users.users.youruser = {
            isNormalUser = true;
            shell = pkgs.zsh;
            extraGroups = [ "wheel" "networkmanager" "docker" ];
            # Generate with: mkpasswd -m sha-512
            hashedPasswordFile = "/etc/nixos/hashed-password";
            openssh.authorizedKeys.keys = [
              "ssh-ed25519 AAAA... your-key"
            ];
          };
          programs.zsh.enable = true;
          security.sudo.wheelNeedsPassword = false;

          # ── Boot ───────────────────────────────────────────
          boot.loader.systemd-boot.enable = true;
          boot.loader.efi.canTouchEfiVariables = true;

          # ── Networking ─────────────────────────────────────
          # Give it a static IP so you can always find it.
          networking.hostName = "thinkcentre";
          networking.networkmanager.enable = true;
          networking.wireless.enable = lib.mkForce false;
          networking.networkmanager.ensureProfiles.profiles = {
            "Wired Static" = {
              connection = {
                id = "Wired Static";
                type = "ethernet";
                interface-name = "eno2"; # check yours with `ip link`
                autoconnect = "true";
              };
              ipv4 = {
                method = "manual";
                address1 = "192.168.X.X/24,192.168.X.1"; # your IP, your gateway
                dns = "192.168.X.1;";
              };
            };
          };

          # ── Firewall ───────────────────────────────────────
          # services.openssh opens port 22 for you; 80/443 are for the
          # reverse proxy. Open more ports as you add services.
          networking.firewall = {
            enable = true;
            allowedTCPPorts = [ 80 443 ];
          };

          # ── Docker ─────────────────────────────────────────
          virtualisation.docker.enable = true;

          # ── SSH ────────────────────────────────────────────
          services.openssh = {
            enable = true;
            settings = {
              PasswordAuthentication = false;
              PermitRootLogin = "no";
            };
          };

          # ── Stay awake (it's a server) ─────────────────────
          services.logind.settings.Login = {
            HandleLidSwitch = "ignore";
            IdleAction = "ignore";
          };
          systemd.targets.sleep.enable = false;
          systemd.targets.suspend.enable = false;
          systemd.targets.hibernate.enable = false;
          systemd.targets.hybrid-sleep.enable = false;

          # ── Misc ───────────────────────────────────────────
          nixpkgs.config.allowUnfree = true;
          nix.settings = {
            experimental-features = [ "nix-command" "flakes" ];
            auto-optimise-store = true;
          };
          nix.gc = {
            automatic = true;
            dates = "weekly";
            options = "--delete-older-than 14d";
          };
          system.stateVersion = "25.05";
          time.timeZone = "Europe/Bucharest";
        })
      ];
    };
  };
}
```

Apply it with `sudo nixos-rebuild switch --flake .#thinkcentre`.
AdGuard Home was my first service. Create a directory for it, drop a docker-compose.yml in it, and run it.
```shell
mkdir -p ~/adguard && cd ~/adguard
```
```yaml
# adguard/docker-compose.yml
services:
  adguard:
    image: adguard/adguardhome
    container_name: adguard
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8080:80"   # Web UI
      - "3000:3000" # Setup wizard on first run
    volumes:
      - adguard_conf:/opt/adguardhome/conf
      - adguard_work:/opt/adguardhome/work
    restart: unless-stopped

volumes:
  adguard_conf:
  adguard_work:
```

```shell
docker compose up -d
```
Go to http://YOUR_SERVER_IP:3000 and finish the setup wizard. Then point your router's DHCP DNS setting to the server's IP. Every device on your network now has ad blocking.
Open the DNS ports in your flake:
```nix
networking.firewall.allowedTCPPorts = [ 80 443 53 853 ];
networking.firewall.allowedUDPPorts = [ 53 ];
```

Once you have 2-3 services you'll want subdomains instead of remembering port numbers. Caddy handles HTTPS automatically.
```yaml
# caddy/docker-compose.yml
services:
  caddy:
    image: caddy:latest
    container_name: caddy
    network_mode: host
    volumes:
      - caddy_data:/data
      - caddy_config:/config
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
    restart: always

volumes:
  caddy_data:
  caddy_config:
```

Since each service binds its port on the host, Caddy just proxies to localhost:PORT:
```
# caddy/Caddyfile
adguard.yourdomain.com {
    reverse_proxy localhost:8080
}

plex.yourdomain.com {
    reverse_proxy localhost:32400
}

jellyfin.yourdomain.com {
    reverse_proxy localhost:8096
}
```

That's it. Caddy gets TLS certificates automatically. Add a new subdomain block for each service.
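If your subdomains only resolve on the LAN (or you don't want ports 80/443 reachable from the internet), Caddy can instead prove domain ownership with a DNS challenge, which requires building Caddy with a DNS provider plugin. A sketch, assuming your domain's DNS is hosted on Cloudflare; the two-stage build pattern comes from Caddy's official Docker image docs:

```
# caddy/Dockerfile — custom build with the Cloudflare DNS plugin (sketch)
FROM caddy:builder AS builder
RUN xcaddy build --with github.com/caddy-dns/cloudflare

FROM caddy:latest
COPY --from=builder /usr/bin/caddy /usr/bin/caddy
```

Swap `image: caddy:latest` for `build: .` in the compose file, then add a `tls { dns cloudflare {env.CF_API_TOKEN} }` block to each site in the Caddyfile (the token variable name is up to you; pass it in via the container's environment).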
One useful thing: point Docker's DNS at your server (where AdGuard runs) so containers can resolve your local domains too. Add this to your flake:
```nix
virtualisation.docker.daemon.settings = {
  dns = [ "192.168.X.X" ]; # your server's IP
};
```

If you have a 12th gen Intel or newer, you get QuickSync hardware transcoding for free. You just need the right NixOS drivers and to pass through /dev/dri.
Add this to your flake:
```nix
hardware.graphics = {
  enable = true;
  extraPackages = with pkgs; [
    intel-media-driver    # VAAPI for 12th gen+
    vpl-gpu-rt            # Intel Video Processing Library
    intel-compute-runtime # OpenCL
  ];
};
networking.firewall.allowedTCPPorts = [ ... 32400 ]; # add Plex
```

```yaml
# plex/docker-compose.yml
services:
  plex:
    image: ghcr.io/hotio/plex:latest
    container_name: plex
    environment:
      TZ: Europe/Bucharest
      PUID: "1000"
      PGID: "100"
      ADVERTISE_IP: "https://plex.yourdomain.com:443"
      ALLOWED_NETWORKS: "192.168.0.0/255.255.255.0"
    volumes:
      - plex_config:/config
      - /mnt/nas/media:/data/media:ro # NAS mount (see step 5)
    network_mode: host # Plex needs host networking for discovery
    devices:
      - /dev/dri:/dev/dri # Intel QuickSync passthrough
    tmpfs:
      - /transcode # Transcode to RAM, not disk
    restart: unless-stopped

volumes:
  plex_config:
```

If you have a Synology or any NAS with SMB shares, mount them in NixOS. The key is x-systemd.automount so it connects on-demand and doesn't block boot if the NAS is off.
Add cifs-utils to your packages and add the mount to your flake:
```nix
environment.systemPackages = with pkgs; [ cifs-utils ];

# Store credentials outside the Nix store:
#   sudo bash -c 'printf "username=you\npassword=YOUR_PASS\n" > /var/lib/nas-credentials'
#   sudo chmod 600 /var/lib/nas-credentials
fileSystems."/mnt/nas/media" = {
  device = "//NAS_IP/media";
  fsType = "cifs";
  options = [
    "credentials=/var/lib/nas-credentials" "uid=1000" "gid=100"
    "nofail" "x-systemd.automount" "x-systemd.idle-timeout=60"
    "x-systemd.device-timeout=5s" "x-systemd.mount-timeout=5s"
  ];
};
```

Now Plex (and anything else) can read from /mnt/nas/media.
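To check the automount on the server, you need the systemd unit name, which is the mount path with the slashes escaped; `systemd-escape` computes it for you (a quick sanity check, not part of the config):

```shell
# systemd names the unit after the path: /mnt/nas/media -> mnt-nas-media.automount
systemd-escape --path --suffix=automount /mnt/nas/media
# then: systemctl status mnt-nas-media.automount
# and a plain `ls /mnt/nas/media` should wake the share on demand
```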
This is the *arr stack: Sonarr (TV), Radarr (movies), Prowlarr (indexers), qBittorrent (downloads), and Jellyfin (playback). I run this on my Synology NAS but it works on any Docker host.
The important thing is the directory structure. All services share the same data directory so Sonarr/Radarr can hardlink completed downloads instead of copying them (saving a lot of disk space).
```
/your/media/directory/
├── media/
│   ├── movies/
│   └── tv/
└── torrents/
    ├── movies/
    └── tv/
```
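To see why the shared layout matters: a hardlink is the same file under two names, so "importing" a finished download into the library costs no extra disk space. A quick demo with stand-in paths (/tmp here; use your real data directory):

```shell
# Create the shared layout, then hardlink a "download" into the library
DATA=/tmp/hardlink-demo
mkdir -p "$DATA"/media/{movies,tv} "$DATA"/torrents/{movies,tv}
echo "fake movie" > "$DATA"/torrents/movies/movie.mkv
ln "$DATA"/torrents/movies/movie.mkv "$DATA"/media/movies/movie.mkv
stat -c %h "$DATA"/media/movies/movie.mkv   # link count: 2 names, one file on disk
```

This only works because torrents/ and media/ sit on the same filesystem, which is exactly what mounting one shared /data into every container gives you.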
Create a .env file:

```
MEDIA_SERVER_DATA_DIR=/path/to/your/media/directory
PUID=1000
PGID=100
TZ=Europe/Bucharest
```

```yaml
# media-server/docker-compose.yml
x-common: &common
  environment: &common-env
    PUID: ${PUID}
    PGID: ${PGID}
    TZ: ${TZ}
    UMASK: 002

services:
  sonarr:
    <<: *common
    container_name: sonarr
    image: ghcr.io/hotio/sonarr:release
    ports: ["8989:8989"]
    volumes:
      - ./sonarr_config:/config
      - ${MEDIA_SERVER_DATA_DIR}:/data

  radarr:
    <<: *common
    container_name: radarr
    image: ghcr.io/hotio/radarr:release
    ports: ["7878:7878"]
    volumes:
      - ./radarr_config:/config
      - ${MEDIA_SERVER_DATA_DIR}:/data

  prowlarr:
    <<: *common
    container_name: prowlarr
    image: ghcr.io/hotio/prowlarr:testing
    ports: ["9696:9696"]
    volumes:
      - ./prowlarr_config:/config

  qbittorrent:
    <<: *common
    container_name: qbittorrent
    image: ghcr.io/hotio/qbittorrent:release
    ports:
      - "6881:6881/tcp"
      - "6881:6881/udp"
      - "9865:9865"
    environment:
      <<: *common-env
      WEBUI_PORTS: "9865/tcp,9865/udp"
    volumes:
      - ./qbittorrent_config:/config
      - ${MEDIA_SERVER_DATA_DIR}/torrents:/data/torrents

  jellyfin:
    <<: *common
    container_name: jellyfin
    image: jellyfin/jellyfin:latest
    ports:
      - "8096:8096"
      - "7359:7359/udp"
    volumes:
      - ./jellyfin_config:/config
      - ./jellyfin_config/cache:/cache
      - ${MEDIA_SERVER_DATA_DIR}:/data:ro
    restart: unless-stopped
```

Once everything is running, set up Recyclarr to automatically sync TRaSH Guides quality profiles into Sonarr and Radarr. This saves you from manually configuring quality profiles:
```yaml
# recyclarr.yml
radarr:
  uhd-bluray-web:
    base_url: !env_var RADARR_BASE_URL
    api_key: !env_var RADARR_API_KEY
    include:
      - template: radarr-quality-definition-movie
      - template: radarr-quality-profile-uhd-bluray-web
      - template: radarr-custom-formats-uhd-bluray-web

sonarr:
  web-2160p-v4:
    base_url: !env_var SONARR_BASE_URL
    api_key: !env_var SONARR_API_KEY
    include:
      - template: sonarr-quality-definition-series
      - template: sonarr-v4-quality-profile-web-2160p
      - template: sonarr-v4-custom-formats-web-2160p
```

At this point I was tired of SSHing into the server every time I wanted to edit a config file. I set up Syncthing to mirror my ~/Projects/ directory between my Mac and the ThinkCentre. Now I edit on my Mac and the changes show up on the server in about 15 seconds.
Add this to your flake:
```nix
services.syncthing = {
  enable = true;
  user = "youruser";
  group = "users";
  dataDir = "/home/youruser";
  configDir = "/home/youruser/.config/syncthing";
  guiAddress = "0.0.0.0:8384";
  settings = {
    devices.your-other-machine = {
      id = "DEVICE-ID-FROM-SYNCTHING-UI";
      addresses = [ "dynamic" ];
      autoAcceptFolders = true;
    };
    folders.Projects = {
      id = "some-folder-id";
      path = "/home/youruser/Projects";
      devices = [ "your-other-machine" ];
      fsWatcherDelayS = 1;
      fsWatcherEnabled = true;
      versioning = {
        type = "staggered";
        params = {
          cleanInterval = "3600";
          maxAge = "2592000"; # 30 days
        };
      };
    };
  };
};
```

Then add a CDPATH so you can `cd caddy` from anywhere instead of typing the full path. I do this via home-manager, but you can just add it to your .zshrc:
```shell
export CDPATH=".:$HOME/Projects/dotfiles/hosts/thinkcentre:$HOME/Projects"
```

The workflow becomes: `ssh thinkcentre`, `cd caddy`, `docker compose up -d`.
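If you haven't used CDPATH before, here's the mechanic in isolation (throwaway /tmp paths for illustration):

```shell
# cd searches each CDPATH entry in order; "." keeps normal cd behavior
mkdir -p /tmp/cdpath-demo/hosts/thinkcentre/caddy
export CDPATH=".:/tmp/cdpath-demo/hosts/thinkcentre"
cd /tmp
cd caddy   # resolved via CDPATH; the shell prints where you landed
pwd        # /tmp/cdpath-demo/hosts/thinkcentre/caddy
```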
Instead of opening ports to the internet, use Tailscale. You can access everything from anywhere as if you were on your LAN.
```nix
services.tailscale = {
  enable = true;
  openFirewall = true;
  useRoutingFeatures = "server";
  extraUpFlags = [
    "--ssh"                             # SSH via Tailscale identity
    "--advertise-exit-node"             # use as VPN exit node
    "--advertise-routes=192.168.0.0/24" # expose LAN to Tailscale
    "--accept-dns=false"                # we run our own DNS
  ];
};
networking.firewall.trustedInterfaces = [ "tailscale0" ];
```

Here's how the whole thing ends up laid out in my dotfiles repo:

```
dotfiles/
├── common/                        # Shared config across all machines
│   ├── git/config
│   ├── shell/zshrc
│   └── ssh/hosts
├── hosts/
│   ├── thinkcentre/               # NixOS server
│   │   ├── flake.nix              # Entire OS config
│   │   ├── hardware-configuration.nix
│   │   ├── adguard/
│   │   │   └── docker-compose.yml # Each service gets its own directory
│   │   ├── caddy/
│   │   │   ├── Caddyfile          # Reverse proxy for everything
│   │   │   ├── docker-compose.yml
│   │   │   └── Dockerfile         # Custom build with Cloudflare DNS plugin
│   │   ├── plex/
│   │   │   └── docker-compose.yml # Intel QuickSync hardware transcoding
│   │   ├── home-assistant/
│   │   │   └── docker-compose.yml
│   │   ├── vaultwarden/
│   │   │   └── docker-compose.yml
│   │   └── ...
│   └── synology/                  # Synology NAS
│       └── media-server/
│           ├── docker-compose.yml # Sonarr, Radarr, Prowlarr, qBittorrent, Jellyfin
│           └── recyclarr.yml      # TRaSH Guides quality profiles
├── secrets/                       # Age-encrypted secrets (via agenix)
└── Justfile                       # `just rebuild` to apply NixOS changes
```
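The Justfile is just a couple of shortcuts run from the repo root; a minimal sketch (the recipe body is hypothetical, adjust the flake path to your layout):

```
# Justfile (sketch)
rebuild:
    sudo nixos-rebuild switch --flake hosts/thinkcentre#thinkcentre
```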
The system-level stuff (networking, firewall, users, SSH, Syncthing, Tailscale) is all in flake.nix and managed by NixOS. My application services are still plain Docker Compose files, one per directory. I haven't gone full NixOS for everything. It's simpler and I haven't felt the need to change it.
This is the setup I ended up on and it works nicely for me.