If you're running Tailscale with an exit node (e.g. Mullvad) on a server that also runs Docker containers, you may hit a maddening issue: containers appear healthy but are unreachable from the host.
- `curl http://127.0.0.1:<port>` to a Docker container hangs (TCP connects, but HTTP times out)
- `docker exec` into the container and curling `localhost` works fine
- Docker healthchecks pass
- Restarting Docker or Tailscale sometimes fixes it temporarily
- The problem comes back silently
When you use `tailscale up --exit-node=...`, Tailscale inserts ip policy rules (check `ip rule list`) at priorities 5210-5270 that route traffic through itself. This includes traffic to Docker bridge networks (172.16.0.0/12).
So when you hit 127.0.0.1:3001 mapped to a container on 172.17.0.x, the return traffic (or even the initial routing) goes through Tailscale and out to your exit node instead of staying on the local Docker bridge. The TCP handshake might complete (SYN/ACK happens locally) but HTTP payload gets blackholed.
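You can see the mechanism directly in the rule table. Below is an illustrative `ip rule list` dump from a host with an exit node active (priorities match the 5210-5270 range above; exact output can vary by Tailscale version), with a grep pulling out the rule that causes the problem:

```shell
# Illustrative `ip rule list` dump captured while an exit node is active.
rules='0:	from all lookup local
5210:	from all fwmark 0x80000/0xff0000 lookup main
5230:	from all fwmark 0x80000/0xff0000 lookup default
5250:	from all fwmark 0x80000/0xff0000 unreachable
5270:	from all lookup 52
32766:	from all lookup main'

# Rule 5270 sends every unmarked packet to Tailscale's table 52 --
# including traffic for 172.16.0.0/12, since no earlier rule exempts it.
echo "$rules" | grep '^5270'
```

The 5210-5250 rules only match Tailscale's own fwmarked packets; it is the catch-all at 5270 that swallows the Docker bridge traffic.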
Add an ip rule that bypasses Tailscale for Docker bridge traffic:
```
ip -4 rule add to 172.16.0.0/12 pref 5002 lookup main
```

The priority (`pref`) must be lower than 5270 (the catch-all rule that sends all traffic into Tailscale's routing table) so your rule matches first.
Here's the catch: Tailscale can re-add its routing rules at any time — on reconnect, key rotation, exit node changes — and this wipes your bypass rules. The networkd-dispatcher "routable" hook only fires on interface state changes, which doesn't cover Tailscale's internal reconnects.
The reliable solution is a systemd timer that re-checks every 30 seconds:
/etc/networkd-dispatcher/routable.d/50-tailscale-bypass (the actual script):
```
#!/bin/bash
# Bypass Tailscale exit node for Docker bridge networks
# Rules must have priority < 5270 (Tailscale's catch-all rule)
ip -4 rule add to 172.16.0.0/12 pref 5002 lookup main 2>/dev/null || true
```

`/etc/systemd/system/tailscale-bypass-rules.service`:
```
[Unit]
Description=Ensure Tailscale bypass ip rules are present
After=tailscaled.service

[Service]
Type=oneshot
ExecStart=/etc/networkd-dispatcher/routable.d/50-tailscale-bypass
```

`/etc/systemd/system/tailscale-bypass-rules.timer`:
```
[Unit]
Description=Re-check Tailscale bypass rules every 30s

[Timer]
OnBootSec=10
OnUnitActiveSec=30

[Install]
WantedBy=timers.target
```

Enable it:
```
sudo systemctl daemon-reload
sudo systemctl enable --now tailscale-bypass-rules.timer
```

The `ip rule add` command fails with "File exists" when an identical rule is already present, and the script suppresses that error with `2>/dev/null || true`, so re-running every 30 seconds costs nothing when the rules are already in place.
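The error-suppression pattern is worth spelling out, since the oneshot service must never report a failure just because the rule already exists. A minimal stand-in (no root needed; `add_rule` is a hypothetical function simulating the duplicate-rule error):

```shell
# Simulate `ip rule add` hitting an already-present rule: iproute2
# prints "RTNETLINK answers: File exists" and exits non-zero.
add_rule() {
  echo "RTNETLINK answers: File exists" >&2
  return 2
}

add_rule 2>/dev/null || true   # same pattern as the bypass script
echo "exit status: $?"         # the failure is swallowed
# -> exit status: 0
```

Without the `|| true`, the oneshot service would be marked failed on every run after the first.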
```
# Check rules are present
ip -4 rule list | grep 5002

# Test container access
curl -s http://127.0.0.1:<your-port>/health

# Watch the timer
systemctl status tailscale-bypass-rules.timer
```

- Container healthchecks run inside the container, where localhost always works. They'll never catch this.
- `docker ps` shows the container as running and healthy.
- The issue is intermittent — it only happens when Tailscale refreshes its rules, which can be days or weeks apart.
- TCP connects but HTTP hangs, so it doesn't look like a routing problem at first glance.
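Because the in-container healthcheck is blind to this, one option is a probe that runs on the host itself. A minimal sketch (the port and `/health` path are placeholders for your own service):

```shell
#!/bin/bash
# Host-side probe (sketch): unlike Docker's internal healthcheck, this
# curl traverses the host's policy routing, so it fails when the bypass
# rule has been wiped. Port and path are placeholders for your service.
probe() {
  curl -fsS --max-time 5 "$1" > /dev/null
}

if ! probe "http://127.0.0.1:3001/health"; then
  echo "host->container path broken; check: ip rule list | grep 5002" >&2
fi
```

Wire it into cron or another systemd timer and route the failure message to whatever alerting you already have.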
If you're on an IPv6-only host using Tailscale's exit node for IPv4, you may also need bypasses for regional services:
```
# Example: Hetzner apt mirror
ip -6 rule add to 2a01:4ff:ff00::3:3 pref 5000 lookup main 2>/dev/null || true
```

Same principle: anything that should go direct instead of through the exit node needs a rule with priority < 5270.
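If the list of exemptions grows, one way to keep them in one place is to generate the commands from a prefix list and review them before piping to `sh` as root (the prefixes and priorities below mirror the examples in this post; adjust for your setup):

```shell
# Emit (dry-run) one bypass rule per prefix that should skip the exit
# node; pipe the output to `sh` as root to apply for real.
v4_prefixes="172.16.0.0/12"
v6_prefixes="2a01:4ff:ff00::3:3"

for p in $v4_prefixes; do
  echo "ip -4 rule add to $p pref 5002 lookup main"
done
for p in $v6_prefixes; do
  echo "ip -6 rule add to $p pref 5000 lookup main"
done
```

The same generated list can double as the body of the re-check script, so the timer and your manual setup never drift apart.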