Goal: docker compose up → a full Azzurra testnet stack with hub + 2 leaves (IPv4 and IPv6 S2S) + services, running in isolated containers, with realistic multi-server linking. Dev / testnet only — explicitly not a production install path.
v2 scope vs v1: Hypnotize 2026-04-20 20:04 pushed back on the one-server-first approach ("go hard or go home — hub and two leaves, one v4 and one v6, plus services"). v2 delivers the full topology on day 1 and exercises both address families for S2S burst, `/whowas`, cross-leaf visibility, and services-through-hub propagation.
Scope split: this lives outside the top-level Dockerfile that already exists in both repos. Those keep their current single-container CI/build role. Compose goes under a new directory (see §Placement question).
Tech stack: Docker Compose v2 (compose.yaml) with dual-stack networks (enable_ipv6: true), small entrypoint shell per container, envsubst-based .conf templating. No third-party images beyond debian:trixie.
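As a quick illustration of the envsubst-based templating approach (the `M:` line below is a hypothetical placeholder, not the final template):

```shell
# Render a ${VAR}-style conf template with envsubst (from gettext-base).
cat > /tmp/demo.tmpl <<'EOF'
M:${SERVER_NAME}:*:${SERVER_DESC}:
EOF
SERVER_NAME=hub.azzurra.chat SERVER_DESC='Azzurra testnet hub' \
  envsubst < /tmp/demo.tmpl
# prints: M:hub.azzurra.chat:*:Azzurra testnet hub:
```

One caveat the entrypoints inherit: plain `envsubst` substitutes every environment variable reference it finds, so any literal `$` in a template must be avoided, or the substituted variable list must be restricted via envsubst's SHELL-FORMAT argument.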
- Fresh contributors cannot today spin up a working network in <30 min — install is bare-metal + conf editing + cert + linking.
- Hypnotize's docker-compose ask in #it-opers 2026-04-20 19:55, refined 20:04 with full topology.
- PR #5 (NS RESETPASS) and the in-flight PR #8 (CS RESETPASS) both needed a bespoke testnet; repeatable compose cuts that setup cost to `docker compose up`.
- Multi-server topology is load-bearing for testing S2S burst (`RPL_SVINFO`, `SJOIN`, `NICK` collisions), `/whowas` cross-leaf, and services propagation through a hub (not direct-attach).
- v4-only vs v6-only S2S links catch address-family regressions that single-stack setups miss (e.g. `addr_family` assumptions in `s_bsd.c`, `P:` binding, `C/N:` host-resolution path).
- Not a replacement for the systemd/bare-metal prod deploy. That is explicit in the README/INSTALL.
- Production TLS / real certs. Compose generates throwaway self-signed certs on first `up` and bind-mounts them read-only.
- Data persistence across `docker compose down -v`. Ephemeral testnets by default. A named volume holds the services DB, so a plain `down` keeps state and `down -v` is a full reset.
- HAProxy, webirc, 6to4 tunnels, ident, stats, links to the real Azzurra network. Zero.
- More than two leaves, or a second hub. Two leaves is sufficient to exercise both address families; bigger topologies are follow-ups.
┌─────────────────────────┐
│ services │
│ services.azzurra.chat │
│ C/N link via hub-v4 │
└──────────┬──────────────┘
│ (IPv4, docker net `svc-net`)
│
┌──────────▼──────────────┐
│ hub │
│ hub.azzurra.chat │
│ dual-stack listener │
│ 6667/6697 (clients) │
│ 7000/v4 7001/v6 S2S │
└──┬───────────────────┬──┘
│ (IPv4) │ (IPv6)
│ `leaf4-net` │ `leaf6-net`
│ │
┌─────────▼───────┐ ┌───────▼─────────┐
│ leaf-v4 │ │ leaf-v6 │
│ leaf4.azzurra │ │ leaf6.azzurra │
│ S2S over v4 │ │ S2S over v6 │
└─────────────────┘ └─────────────────┘
- hub has three docker networks: `svc-net` (services), `leaf4-net` (IPv4-only), `leaf6-net` (IPv6-only, `enable_ipv6: true` + fixed `fd00:a22e::/64` ULA).
- leaf-v4 attaches only to `leaf4-net` — its `C/N` block to the hub resolves to the hub's IPv4 address on that net.
- leaf-v6 attaches only to `leaf6-net` — its `C/N` block to the hub resolves to the hub's IPv6 address on that net.
- services attaches only to `svc-net` — it links to the hub over IPv4 (services S2S doesn't need v6 coverage; the leaves already exercise the v6 path).
- Client-facing ports (6667/6697) are exposed to the host only from hub + both leaves, on distinct host ports (hub 6667/6697, leaf4 6668/6698, leaf6 6669/6699) so irssi/weechat can connect to each.
Two candidate layouts (unchanged from v1):
- In `azzurra/bahamut` under `docker/compose/` — pulls `services` as a git submodule or builds from a parent image.
- In `azzurra/services` under `docker/compose/` — pulls `bahamut` as submodule / build.
- New repo `azzurra/testnet-compose` — neutral ground, references both as submodules.
Recommendation: option 3. Compose is a cross-cutting dev tool; neither of the two repos owns it naturally. Submodule pinning means the testnet tracks verified revisions (good for debugging PR-against-PR). Now stronger with v2's topology: the compose file is non-trivial and deserves its own review surface.
testnet-compose/ # whichever repo/dir wins the placement question
├── README.md
├── compose.yaml
├── .env.example
├── bahamut/
│ ├── Dockerfile # debian:trixie + deps + build, one image for all 3 roles
│ ├── options.h_hub # copied from buildbot/options.h_hub
│ ├── entrypoint.sh # render conf -> exec ircd -F, role-aware
│ ├── conf.hub.tmpl # hub bahamut.conf template
│ ├── conf.leaf4.tmpl # leaf-v4 bahamut.conf template
│ └── conf.leaf6.tmpl # leaf-v6 bahamut.conf template
├── services/
│ ├── Dockerfile # debian:trixie + deps + build
│ ├── entrypoint.sh # render conf -> exec services
│ └── conf.tmpl # services.conf template
├── certs/
│ └── gen-cert.sh # generates 4 throwaway self-signed certs on first run
└── scripts/
└── smoke.sh # connects to each server, verifies burst + services
One bahamut image, three roles. The bahamut/Dockerfile is built once; hub/leaf4/leaf6 containers all use the same image, differentiated by SERVER_ROLE env var which the entrypoint reads to pick the right conf.*.tmpl. Saves build time and ensures the three ircds are binary-identical (the only prod-relevant combination).
```yaml
name: azzurra-testnet

x-bahamut-build: &bahamut-build
  build:
    context: ./bahamut
    args:
      BAHAMUT_REF: ${BAHAMUT_REF:-master}

services:
  cert-init:
    image: alpine:3.20
    volumes:
      - ./certs:/certs
    command: ["/bin/sh", "/certs/gen-cert.sh"]
    # runs once, exits 0 if certs already exist

  hub:
    <<: *bahamut-build
    depends_on:
      cert-init:
        condition: service_completed_successfully
    environment:
      - SERVER_ROLE=hub
      - SERVER_NAME=hub.azzurra.chat
      - SERVER_DESC=Azzurra testnet — hub
      - SERVICES_PASSWORD=${SERVICES_PASSWORD:-testlink}
      - LEAF_PASSWORD=${LEAF_PASSWORD:-testleaf}
      - OPER_NICK=${OPER_NICK:-testoper}
      - OPER_PASS=${OPER_PASS:-testoperpass}
      - LISTEN_PORT=6667
      - LISTEN_SSL_PORT=6697
      - LINK_PORT_V4=7000
      - LINK_PORT_V6=7001
    ports:
      - "6667:6667"
      - "6697:6697"
    volumes:
      - ./certs:/etc/bahamut/certs:ro
    networks:
      svc-net:
        aliases: [hub.azzurra.chat]
      leaf4-net:
        aliases: [hub.azzurra.chat]
      leaf6-net:
        aliases: [hub.azzurra.chat]
    healthcheck:
      # bash, not sh: /dev/tcp is a bashism and debian-slim's /bin/sh is dash
      test: ["CMD", "bash", "-c", "echo > /dev/tcp/127.0.0.1/6667"]
      interval: 5s
      timeout: 2s
      retries: 20

  leaf-v4:
    <<: *bahamut-build
    depends_on:
      hub:
        condition: service_healthy
    environment:
      - SERVER_ROLE=leaf4
      - SERVER_NAME=leaf4.azzurra.chat
      - SERVER_DESC=Azzurra testnet — leaf-v4
      - HUB=hub.azzurra.chat
      - HUB_PORT=7000
      - LEAF_PASSWORD=${LEAF_PASSWORD:-testleaf}
      - OPER_NICK=${OPER_NICK:-testoper}
      - OPER_PASS=${OPER_PASS:-testoperpass}
      - LISTEN_PORT=6667
      - LISTEN_SSL_PORT=6697
    ports:
      - "6668:6667"
      - "6698:6697"
    volumes:
      - ./certs:/etc/bahamut/certs:ro
    networks:
      - leaf4-net

  leaf-v6:
    <<: *bahamut-build
    depends_on:
      hub:
        condition: service_healthy
    environment:
      - SERVER_ROLE=leaf6
      - SERVER_NAME=leaf6.azzurra.chat
      - SERVER_DESC=Azzurra testnet — leaf-v6
      - HUB=hub.azzurra.chat
      - HUB_PORT=7001
      - LEAF_PASSWORD=${LEAF_PASSWORD:-testleaf}
      - OPER_NICK=${OPER_NICK:-testoper}
      - OPER_PASS=${OPER_PASS:-testoperpass}
      - LISTEN_PORT=6667
      - LISTEN_SSL_PORT=6697
    ports:
      - "6669:6667"
      - "6699:6697"
    volumes:
      - ./certs:/etc/bahamut/certs:ro
    networks:
      - leaf6-net

  services:
    build:
      context: ./services
      args:
        SERVICES_REF: ${SERVICES_REF:-master}
    depends_on:
      hub:
        condition: service_healthy
    environment:
      - SERVICES_NAME=services.azzurra.chat
      - HUB=hub.azzurra.chat
      - HUB_PORT=7000
      - SERVICES_PASSWORD=${SERVICES_PASSWORD:-testlink}
      - SERVICES_DESC=Azzurra IRC Services
    volumes:
      - services-data:/var/lib/services
    networks:
      - svc-net

volumes:
  services-data:

networks:
  svc-net:
    driver: bridge
  leaf4-net:
    driver: bridge
    ipam:
      config:
        - subnet: 172.30.4.0/24
  leaf6-net:
    enable_ipv6: true
    driver: bridge
    ipam:
      config:
        - subnet: fd00:a22e::/64
```

Files
- Create: `README.md`, `compose.yaml`, `.env.example`, `.gitignore`, `scripts/smoke.sh`.
Steps
- 1.1 Create the directory and empty stubs for every file in the layout above.
- 1.2 Write `README.md`: "Dev/testnet only. Not for prod. `docker compose up` gives you hub + v4 leaf + v6 leaf + services." Link both repos' INSTALL.md for the production path. Document host ports (6667/6697 hub, 6668/6698 leaf4, 6669/6699 leaf6).
- 1.3 Write `.env.example` listing every configurable knob with safe defaults.
- 1.4 Write `.gitignore` for `certs/*.pem`, `certs/*.key`, `.env`, `__pycache__/`.
- 1.5 Commit `chore: scaffold testnet-compose layout (hub+2leaf+services)`.
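One possible shape for `.env.example` (variable names follow the compose sketch in this plan; values are the throwaway testnet defaults, not suggestions for anything real):

```shell
# .env.example — copy to .env and adjust. Testnet-only throwaway defaults.
BAHAMUT_REF=master            # git ref of azzurra/bahamut to build
SERVICES_REF=master           # git ref of azzurra/services to build
SERVICES_PASSWORD=testlink    # hub <-> services link password
LEAF_PASSWORD=testleaf        # hub <-> leaf link password
OPER_NICK=testoper
OPER_PASS=testoperpass
```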
Files
- Create: `bahamut/Dockerfile`, `bahamut/options.h_hub`, `bahamut/entrypoint.sh`, `bahamut/conf.hub.tmpl`, `bahamut/conf.leaf4.tmpl`, `bahamut/conf.leaf6.tmpl`.
Steps
- 2.1 Write `bahamut/Dockerfile` with two stages: builder (`debian:trixie` + build deps) and runtime (`debian:trixie-slim` + just enough libs + the compiled `ircd` + `gettext-base` for `envsubst`).

```dockerfile
FROM debian:trixie AS build
ARG BAHAMUT_REF=master
RUN apt-get update && apt-get install -y --no-install-recommends \
    autoconf build-essential libssl-dev zlib1g-dev libcrypt-dev \
    ca-certificates git
WORKDIR /src
RUN git clone --depth=1 --branch=${BAHAMUT_REF} https://github.com/azzurra/bahamut.git .
COPY options.h_hub include/options.h
RUN autoconf && ./configure && make

FROM debian:trixie-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
    libssl3 zlib1g libcrypt1 ca-certificates gettext-base \
    && rm -rf /var/lib/apt/lists/*
COPY --from=build /src/src/ircd /usr/local/sbin/ircd
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
COPY conf.hub.tmpl conf.leaf4.tmpl conf.leaf6.tmpl /etc/bahamut/
RUN chmod +x /usr/local/bin/entrypoint.sh \
    && useradd -r -s /usr/sbin/nologin -d /var/lib/bahamut ircd \
    && mkdir -p /var/lib/bahamut /etc/bahamut/certs \
    && chown -R ircd: /var/lib/bahamut /etc/bahamut
USER ircd
EXPOSE 6667 6697 7000 7001
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
```
- 2.2 Write `bahamut/entrypoint.sh` — role-aware:

```sh
#!/bin/sh
set -eu
: "${SERVER_ROLE:?}" "${SERVER_NAME:?}" "${LISTEN_PORT:?}" "${LISTEN_SSL_PORT:?}"
case "${SERVER_ROLE}" in
  hub)   tmpl=/etc/bahamut/conf.hub.tmpl ;;
  leaf4) tmpl=/etc/bahamut/conf.leaf4.tmpl ;;
  leaf6) tmpl=/etc/bahamut/conf.leaf6.tmpl ;;
  *) echo "unknown SERVER_ROLE: ${SERVER_ROLE}" >&2; exit 2 ;;
esac
envsubst < "${tmpl}" > /etc/bahamut/bahamut.conf
exec /usr/local/sbin/ircd -F -f /etc/bahamut/bahamut.conf
```
- 2.3 Write `bahamut/conf.hub.tmpl`. Must accept services on `LINK_PORT_V4`, leaf-v4 on `LINK_PORT_V4`, leaf-v6 on `LINK_PORT_V6`. Minimal shape:

```
M:${SERVER_NAME}:*:${SERVER_DESC}:
A:Azzurra:testnet
Y:1:90:0:20:500000
Y:50:90:300:10:1000000
I:*@*::*@*::1
O:*@*:${OPER_PASS_HASH}:${OPER_NICK}::1
P:*:::${LISTEN_PORT}
P:*::S:${LISTEN_SSL_PORT}
P:*:::${LINK_PORT_V4}
P:::::${LINK_PORT_V6}   # v6 listener — bind any6
# Services link (v4, svc-net)
C:services.azzurra.chat:${SERVICES_PASSWORD}:services.azzurra.chat:${LINK_PORT_V4}:50
N:services.azzurra.chat:${SERVICES_PASSWORD}:services.azzurra.chat::50
H:*:*:services.azzurra.chat
U:*:*:services.azzurra.chat
# Leaf-v4 link (v4, leaf4-net)
C:leaf4.azzurra.chat:${LEAF_PASSWORD}:leaf4.azzurra.chat:${LINK_PORT_V4}:50
N:leaf4.azzurra.chat:${LEAF_PASSWORD}:leaf4.azzurra.chat::50
H:*:*:leaf4.azzurra.chat
# Leaf-v6 link (v6, leaf6-net) — hostname resolves to v6 via docker DNS
C:leaf6.azzurra.chat:${LEAF_PASSWORD}:leaf6.azzurra.chat:${LINK_PORT_V6}:50
N:leaf6.azzurra.chat:${LEAF_PASSWORD}:leaf6.azzurra.chat::50
H:*:*:leaf6.azzurra.chat
```

- 2.4 Write `bahamut/conf.leaf4.tmpl` — outbound autoconnect to hub on v4:

```
M:${SERVER_NAME}:*:${SERVER_DESC}:
A:Azzurra:testnet
Y:1:90:0:20:500000
Y:50:90:300:10:1000000
I:*@*::*@*::1
O:*@*:${OPER_PASS_HASH}:${OPER_NICK}::1
P:*:::${LISTEN_PORT}
P:*::S:${LISTEN_SSL_PORT}
C:${HUB}:${LEAF_PASSWORD}:${HUB}:${HUB_PORT}:50
N:${HUB}:${LEAF_PASSWORD}:${HUB}::50
H:*:*:${HUB}
```

- 2.5 Write `bahamut/conf.leaf6.tmpl` — outbound autoconnect to hub on v6. Key difference: the `C:` line's 4th field (host) must resolve to a v6 address; confirm via `getent ahosts`, or use an explicit v6 literal once docker-compose-v2 DNS resolution semantics on `enable_ipv6` networks are verified (see §Review checklist).

```
M:${SERVER_NAME}:*:${SERVER_DESC}:
A:Azzurra:testnet
Y:1:90:0:20:500000
Y:50:90:300:10:1000000
I:*@*::*@*::1
O:*@*:${OPER_PASS_HASH}:${OPER_NICK}::1
P:*:::${LISTEN_PORT}
P:*::S:${LISTEN_SSL_PORT}
C:${HUB}:${LEAF_PASSWORD}:${HUB}:${HUB_PORT}:50
N:${HUB}:${LEAF_PASSWORD}:${HUB}::50
H:*:*:${HUB}
```
- 2.6 Commit `feat(bahamut): role-aware image (hub/leaf4/leaf6)`.
Files
- Create: `services/Dockerfile`, `services/entrypoint.sh`, `services/conf.tmpl`.
Steps
- 3.1 Write `services/Dockerfile`, same two-stage pattern:

```dockerfile
FROM debian:trixie AS build
ARG SERVICES_REF=master
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential libc6-dev python3 ca-certificates git
WORKDIR /src
RUN git clone --depth=1 --branch=${SERVICES_REF} https://github.com/azzurra/services.git .
RUN ./configure && python3 lang/langcomp.py && make

FROM debian:trixie-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
    libc6 gettext-base ca-certificates \
    && rm -rf /var/lib/apt/lists/*
COPY --from=build /src/services /usr/local/sbin/services
COPY --from=build /src/run/data /usr/local/share/services/data
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
COPY conf.tmpl /etc/services/services.conf.tmpl
RUN chmod +x /usr/local/bin/entrypoint.sh \
    && useradd -r -s /usr/sbin/nologin -d /var/lib/services services \
    && mkdir -p /var/lib/services /etc/services \
    && chown -R services: /var/lib/services /etc/services
USER services
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
```
- 3.2 Write `services/entrypoint.sh`:

```sh
#!/bin/sh
set -eu
: "${HUB:?}" "${HUB_PORT:?}" "${SERVICES_PASSWORD:?}" "${SERVICES_NAME:?}"
envsubst < /etc/services/services.conf.tmpl > /etc/services/services.conf
cd /var/lib/services
exec /usr/local/sbin/services -debug -dir /usr/local/share/services/data -conf /etc/services/services.conf
```

  (Verify the `-debug`/`-dir`/`-conf` flag names against `src/services.c` before shipping. Placeholder — must confirm.)
- 3.3 Write `services/conf.tmpl`, grounded in `azzurra/services/doc/services.conf.example`:

```
C:${SERVICES_NAME}:${SERVICES_PASSWORD}:${HUB}:${HUB_PORT}
D:${SERVICES_DESC}
U:service:azzurra.chat
A:Azzurra
M:testmaster
```
- 3.4 Commit `feat(services): minimal services container`.
Files
- Create: `certs/gen-cert.sh`, `certs/.gitignore`.
Steps
- 4.1 Write `certs/gen-cert.sh`. Generate four certs: `hub.pem`, `leaf4.pem`, `leaf6.pem`, `services.pem` (the last one only if services is wired for TLS S2S; defer if not). CN matches the server-name env for each role.

```sh
#!/bin/sh
set -eu
cd "$(dirname "$0")"
for cn in hub leaf4 leaf6; do
  [ -s "${cn}.pem" ] && continue
  openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=${cn}.azzurra.chat" \
    -keyout "${cn}.key" -out "${cn}.pem"
  cat "${cn}.key" >> "${cn}.pem"
done
```
- 4.2 Wire in as the `cert-init` service, with `depends_on.condition: service_completed_successfully` blocking all three bahamut containers (as shown in `compose.yaml` above).
- 4.3 Commit `feat(certs): throwaway self-signed certs per server`.
Files
- Create: `scripts/smoke.sh`, `.github/workflows/testnet.yml`.
Steps
- 5.1 `scripts/smoke.sh` does:
  - Start the stack (`docker compose up -d --wait`).
  - Wait for all three bahamut containers to reach healthy state and for services to log `netinfo` (burst done).
  - Connect a test client to each of hub/leaf4/leaf6 and verify:
    - `/server` lists all three ircds + services.
    - `/map` shows hub at the root with leaf4+leaf6 as children.
    - A nick joined on leaf4 is visible from leaf6 (`/whois` cross-server).
    - `/nickserv register` + `/nickserv identify` round-trip via services through the hub.
    - After disconnecting the nick, `/whowas` returns it on all three ircds.
  - Tear down (`docker compose down -v`).
  - Non-zero exit on any failure.
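The topology check above can be sketched as a small parser over raw LINKS replies. The assumed line shape — numeric 364 (RPL_LINKS) with the listed server in field 4 and its uplink in field 5 — must be verified against what this ircd actually emits; `check_topology` is a hypothetical helper name:

```shell
# check_topology: reads raw IRC server replies on stdin and exits 0 iff both
# leaves report the hub as their uplink in LINKS output.
# Assumed shape: ":<from> 364 <nick> <server> <uplink> :<hops> <info>"
check_topology() {
  awk '$2 == "364" { uplink[$4] = $5 }
       END {
         ok = (uplink["leaf4.azzurra.chat"] == "hub.azzurra.chat" &&
               uplink["leaf6.azzurra.chat"] == "hub.azzurra.chat")
         exit (ok ? 0 : 1)
       }'
}
```

smoke.sh would pipe each client's LINKS capture through this and fail the run on a non-zero exit.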
- 5.2 CI workflow `.github/workflows/testnet.yml`: `docker compose up -d --wait`, `bash scripts/smoke.sh`, `docker compose down -v`. Matrix `{BAHAMUT_REF, SERVICES_REF}` = `{master, master}` for now. GitHub-hosted runners support IPv6 in Docker (confirm in §Review checklist).
- 5.3 Commit `test: smoke hub+leaf4+leaf6+services stack`.
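A possible skeleton for the workflow (job/step names and the `workflow_dispatch` trigger are assumptions pending the IPv6-on-runners question):

```yaml
name: testnet-smoke
on:
  pull_request:
  workflow_dispatch:
jobs:
  smoke:
    runs-on: ubuntu-latest
    env:
      BAHAMUT_REF: master
      SERVICES_REF: master
    steps:
      - uses: actions/checkout@v4
      - run: docker compose up -d --wait --build
      - run: bash scripts/smoke.sh
      - if: always()
        run: docker compose down -v
```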
- 6.1 `README.md`: quickstart (`docker compose up`), `.env` knobs, host-port map (6667/6697 hub, 6668/6698 leaf4, 6669/6699 leaf6), how to reset (`docker compose down -v`), "NOT FOR PROD" banner, and a short note on the v4/v6 split so new contributors know why there are three ircds.
- 6.2 Open PR against the winning placement (probably the new `azzurra/testnet-compose` repo).
- Placement: new `testnet-compose` repo vs subfolder in one of the existing repos. Recommendation: new repo (stronger with v2's size).
- IPv6 on GH-hosted runners: Docker Engine on `ubuntu-latest` runners historically needs `daemon.json` tweaking for `enable_ipv6`. Need to confirm, or fall back to `workflow_dispatch`-only CI with a documented local-run flow.
- C/N shape in the hub conf (three linked ircds + services): confirm no `Y:` class collision, confirm the `H:*:*:` shape for each, confirm the hub accepts both leaf autoconnects simultaneously during burst.
- `P:::::${LINK_PORT_V6}` binding: bahamut `P:` line syntax for an IPv6 bind needs source verification (`s_bsd.c`: `add_listener` + `make_listener`). Might need `P:::I:${LINK_PORT_V6}` with the IPv6 bind flag — placeholder, to confirm before shipping the template.
- Docker DNS v4/v6 resolution on mixed networks: when `hub.azzurra.chat` is on three networks (`svc-net` v4, `leaf4-net` v4, `leaf6-net` v6), what does a container on `leaf6-net` resolve? Docker's embedded DNS returns per-network aliases; confirm leaf-v6 gets the v6 address on `leaf6-net`. If not, switch `conf.leaf6.tmpl`'s `C:` host field to an explicit ULA literal.
- Port mapping: hub 6667/6697, leaf4 6668/6698, leaf6 6669/6699 to host. Link ports 7000/7001 stay internal. Acceptable?
- Cert strategy: throwaway self-signed per server + bind-mount read-only vs one-cert-for-all with SAN entries. Current plan = one cert per server CN. Acceptable?
- Services `-debug`/`-dir`/`-conf` flags: still need verification against `src/services.c` before finalising `services/entrypoint.sh`. Placeholder.
- Oper password hashing: bahamut wants a hashed password in the O-line. Options: (a) pre-hash at container build (bad — rebakes the image per change); (b) `entrypoint.sh` calls `mkpasswd` at boot (OK, adds the `whois` package); (c) accept plaintext in `.env` and hash via a small Python snippet on entry. Need input.
- CI cost: full stack (4 containers) spin + smoke ≈ 3-5 minutes per PR. Acceptable on the existing matrix, or move to `workflow_dispatch` only?
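The mixed-network DNS question can be probed from inside the running leaf once the stack is up. Sketch only: `has_ula` is a hypothetical helper, and the prefix it greps for matches the `fd00:a22e::/64` IPAM config in the compose sketch:

```shell
# has_ula: reads `getent ahosts` output (address in column 1) and succeeds
# iff at least one address falls in leaf6-net's fd00:a22e::/64 ULA range.
has_ula() { awk '$1 ~ /^fd00:a22e:/ { found = 1 } END { exit (found ? 0 : 1) }'; }

# Usage once the stack is up (not run here):
#   docker compose exec leaf-v6 getent ahosts hub.azzurra.chat | has_ula \
#     || echo "leaf-v6 resolved no v6 address for the hub — switch the C: host to a ULA literal"
```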
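A fourth option worth weighing: hash at boot with `openssl passwd -1` (MD5-crypt), which needs the `openssl` CLI in the runtime image (an extra package — `libssl3` alone does not provide it). This assumes bahamut's O-line check goes through crypt(3) and that the libc accepts MD5-crypt hashes; both need verification against the source before adoption:

```shell
# Render an MD5-crypt hash for the O-line at container start, so .env keeps
# plaintext OPER_PASS and the image never bakes in a hash.
OPER_PASS=testoperpass
OPER_PASS_HASH=$(openssl passwd -1 -salt testsalt "${OPER_PASS}")
echo "${OPER_PASS_HASH}"
# shape: $1$testsalt$<22-char digest>
```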
- Three-or-more-leaf topologies.
- Second hub (hub-hub linking — different C/N shape, burst-of-bursts).
- Telegram bridge (cristobot integration).
- Persistent helper-cert store across restarts via a `config` docker-compose stanza.
- Load-gen harness (fake clients across all three ircds).
- webirc / HAProxy in front of hub.
Entry point for execution after sign-off: subagent-driven-development skill with this plan. Each task gets a fresh subagent, two-stage review. Tasks 2 + 3 + 4 are parallelisable once Task 1 lands.