A common home-lab setup involves a Docker host with dual-stack connectivity (IPv4 + IPv6 on the host), where containers are attached to a macvlan network to appear as first-class devices on the LAN with their own IPv4 addresses.
The motivation for still wanting host-side port bindings is precisely the IPv6 gap: the macvlan network and the containers on it are IPv4-only. If you want to expose a container service over IPv6 — so that it's reachable at [host-ipv6-address]:port — you cannot do it via the macvlan IP. You need the Docker host's IPv6 address to forward traffic into the container, and that requires a working ports: binding on the host's network stack.
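For reference, an external macvlan network of this kind is typically created ahead of time with `docker network create`. The parent interface name, subnet, and gateway below are assumptions for a typical home LAN, not values taken from a real setup:

```shell
# Sketch: create the pre-existing macvlan network that the Compose file
# will reference as "external-macvlan". Parent interface (eth0) and the
# 192.168.1.0/24 subnet are assumptions; adjust to your LAN.
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  external-macvlan
```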
When a Docker container is attached to an external macvlan network, Docker sets that network as the container's primary NetworkMode. As a consequence, any ports: mappings declared in the Compose file are silently ignored — no error is raised, but docker inspect will show "Ports": {} regardless of what was declared.
This happens even if the macvlan network is listed last under networks: in the service definition. Docker always promotes the external network to NetworkMode, and port bindings only work when the primary network mode is a bridge.
An excerpt from `docker inspect` on such a container illustrates the mismatch:

```json
"NetworkMode": "external-macvlan",
"PortBindings": {
  "80/tcp": [{ "HostIp": "", "HostPort": "59999" }]
},
...
"Ports": {},   ← binding declared but never applied
```
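The symptom can be checked in one line without reading the full inspect output. This assumes the container is named `whoami`, as in the example stack below:

```shell
# Sketch: print the effective network mode and the published ports for the
# container. On an affected container the second value is null/empty even
# though PortBindings were declared.
docker inspect \
  --format '{{.HostConfig.NetworkMode}} {{json .NetworkSettings.Ports}}' \
  whoami
```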
Instead of relying on Docker's port binding mechanism, a lightweight rinetd sidecar container is added to the Compose stack. It lives exclusively on an internal bridge network, has a working ports: mapping on the host, and forwards traffic to the main container over that bridge.
```
[host :59999] → [rinetd container] → (internal bridge) → [whoami container :80]
[whoami container] → (macvlan) → LAN @ 192.168.1.x:80
```
This way:
- The main container retains its macvlan LAN IP and is reachable directly on the LAN over IPv4.
- The host's `ports:` binding (including over IPv6) is handled by the sidecar, which sits on a plain bridge network where bindings work correctly.
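Once the stack is up, both paths can be verified with `curl`. The addresses below are hypothetical placeholders, not outputs from a real run:

```shell
# Sketch: check all three reachability paths (addresses are assumptions).
curl http://192.168.1.50:80          # whoami directly, via its macvlan LAN IP
curl http://192.168.1.10:59999       # via the host's IPv4 and the rinetd sidecar
curl "http://[2001:db8::10]:59999"   # via the host's IPv6 and the rinetd sidecar
```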
```yaml
services:
  whoami:
    image: traefik/whoami
    restart: unless-stopped
    networks:
      - internal
      - external-macvlan

  rinetd:
    image: alpine:latest
    command: >
      sh -c "apk add --no-cache --repository=https://dl-cdn.alpinelinux.org/alpine/edge/testing rinetd &&
      echo '0.0.0.0 59999 whoami 80' > /etc/rinetd.conf &&
      rinetd -f -c /etc/rinetd.conf"
    ports:
      - 59999:59999/tcp
    networks:
      - internal
    depends_on:
      - whoami

networks:
  external-macvlan:
    external: true   # pre-existing macvlan network
  internal:
    driver: bridge   # created by this Compose file
```

- `whoami` is attached to both networks: it gets a LAN IP via `external-macvlan` and is also reachable by name (`whoami`) on the `internal` bridge.
- `rinetd` is attached to `internal` only. Its `ports:` mapping works correctly because its `NetworkMode` is the bridge.
- On startup, the `rinetd` container installs rinetd from the Alpine `edge/testing` repository (where it lives), writes a minimal config, and runs it in the foreground with `-f`. rinetd forwards all TCP traffic arriving on `0.0.0.0:59999` (on the host, including IPv6 via `::`) to `whoami:80` over the internal bridge.
- The `ports:` block on the main service (`whoami`) can be removed entirely: it is ignored by Docker and serves no purpose when an external macvlan network is attached.
- `0.0.0.0` in the rinetd forwarding rule binds to all host interfaces. Docker will also bind the host's IPv6 address if the daemon has IPv6 enabled, making the service reachable at `[::]:59999`.
- Any other lightweight TCP forwarder (e.g. `socat`, nginx `stream`, HAProxy) can substitute for rinetd in the same sidecar pattern.
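As an illustration of that substitution, a socat-based sidecar service might look like the following. This is an untested sketch under the same assumptions as above (internal bridge, service name `whoami`, host port 59999); the service name `forwarder` is hypothetical:

```yaml
# Hypothetical socat variant of the sidecar (sketch, not a tested service).
forwarder:
  image: alpine:latest
  command: >
    sh -c "apk add --no-cache socat &&
    socat TCP-LISTEN:59999,fork,reuseaddr TCP:whoami:80"
  ports:
    - 59999:59999/tcp
  networks:
    - internal
  depends_on:
    - whoami
```

Unlike rinetd, socat is available from Alpine's default repositories, so no `edge/testing` repository pin is needed.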