@davidar
Created April 11, 2026 00:51

Tailscale Exit Node Breaks Docker Localhost Access (and How to Fix It)

If you're running Tailscale with an exit node (e.g. Mullvad) on a server that also runs Docker containers, you may hit a maddening issue: containers appear healthy but are unreachable from the host.

Symptoms

  • curl http://127.0.0.1:<port> to a Docker container hangs (TCP connects, but HTTP times out)
  • docker exec into the container and curling localhost works fine
  • Docker healthchecks pass
  • Restarting Docker or Tailscale sometimes fixes it temporarily
  • The problem comes back silently

What's Happening

When you use tailscale up --exit-node=..., Tailscale inserts ip policy rules (check ip rule list) at priorities 5210-5270. The last of these, "from all lookup 52" at priority 5270, sends every packet not matched by an earlier rule through Tailscale's routing table, and that includes traffic to Docker bridge networks (172.16.0.0/12).

So when you hit 127.0.0.1:3001 mapped to a container on 172.17.0.x, the return traffic (or even the initial routing) goes through Tailscale and out to your exit node instead of staying on the local Docker bridge. The TCP handshake might complete (SYN/ACK happens locally) but HTTP payload gets blackholed.
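Why a rule for 172.16.0.0/12 catches the default bridge: a /12 on 172 spans 172.16.0.0 through 172.31.255.255, so addresses like 172.17.0.x fall inside it. A quick sanity check, sketched in plain bash (no root needed; the addresses are just examples):

```shell
#!/bin/bash
# Does an IPv4 address fall inside 172.16.0.0/12, the block Docker
# allocates bridge subnets from? A /12 on 172.x means the second
# octet must be in 16..31.
in_docker_pool() {
  local ip=$1 o1 o2 rest
  IFS=. read -r o1 o2 rest <<< "$ip"
  [[ $o1 -eq 172 && $o2 -ge 16 && $o2 -le 31 ]]
}

# 172.17.0.2 and 172.31.255.254 are inside the pool;
# 172.32.0.1 and 192.168.1.10 are not.
for ip in 172.17.0.2 172.31.255.254 172.32.0.1 192.168.1.10; do
  if in_docker_pool "$ip"; then
    echo "$ip: matched by the 172.16.0.0/12 rule"
  else
    echo "$ip: not matched"
  fi
done
```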

The Fix

Add an ip rule that bypasses Tailscale for Docker bridge traffic:

ip -4 rule add to 172.16.0.0/12 pref 5002 lookup main

Rules are evaluated in ascending priority order and the first match wins, so the priority (pref) must be lower than 5270, the Tailscale rule that diverts all remaining traffic into its table.
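To make the ordering concrete, here is a toy bash model of first-match-wins evaluation. The rule set and the selector matching are simplified stand-ins for illustration, not the kernel's real fib-rule logic:

```shell
#!/bin/bash
# Toy model: rules as "pref selector table", walked in ascending pref
# order; the first selector that matches the destination decides the table.
rules=(
  "5002 172.16.0.0/12 main"   # our bypass rule
  "5270 all tailscale"        # stand-in for Tailscale's 'from all lookup 52'
)

lookup_table() {
  local dst=$1 pref sel table
  while read -r pref sel table; do
    # crude selector match: 'all' matches everything; the only CIDR we
    # model is the Docker /12, matched with a prefix regex
    if [[ $sel == all ]] || [[ $dst =~ ^172\.(1[6-9]|2[0-9]|3[01])\. ]]; then
      echo "$table"
      return
    fi
  done <<< "$(printf '%s\n' "${rules[@]}" | sort -n)"
}

lookup_table 172.17.0.5   # container traffic hits pref 5002 first: main
lookup_table 8.8.8.8      # everything else falls through to 5270: tailscale
```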

Making It Persistent

Here's the catch: Tailscale can re-add its routing rules at any time — on reconnect, key rotation, exit node changes — and this wipes your bypass rules. The networkd-dispatcher "routable" hook only fires on interface state changes, which doesn't cover Tailscale's internal reconnects.

The reliable solution is a systemd timer that re-checks every 30 seconds:

/etc/networkd-dispatcher/routable.d/50-tailscale-bypass (the script itself; keeping it under routable.d means networkd-dispatcher also runs it whenever an interface becomes routable):

#!/bin/bash
# Bypass Tailscale exit node for Docker bridge networks
# Rules must have priority < 5270 (Tailscale's priority)
ip -4 rule add to 172.16.0.0/12 pref 5002 lookup main 2>/dev/null || true

/etc/systemd/system/tailscale-bypass-rules.service:

[Unit]
Description=Ensure Tailscale bypass ip rules are present
After=tailscaled.service

[Service]
Type=oneshot
ExecStart=/etc/networkd-dispatcher/routable.d/50-tailscale-bypass

/etc/systemd/system/tailscale-bypass-rules.timer:

[Unit]
Description=Re-check Tailscale bypass rules every 30s

[Timer]
OnBootSec=10
OnUnitActiveSec=30

[Install]
WantedBy=timers.target

Make the script executable, then enable the timer:

sudo chmod +x /etc/networkd-dispatcher/routable.d/50-tailscale-bypass
sudo systemctl daemon-reload
sudo systemctl enable --now tailscale-bypass-rules.timer

A duplicate ip rule add fails with "RTNETLINK answers: File exists", and the 2>/dev/null || true in the script swallows that error, so running every 30 seconds is effectively a no-op when the rule is already in place.
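A minimal demonstration of the guard idiom the script relies on, with false standing in for the failing ip rule add:

```shell
#!/bin/bash
set -e   # like a oneshot service, abort on any unguarded failure

# 'false' stands in for a second 'ip rule add' that fails because the
# rule already exists; '|| true' swallows the failure so set -e does
# not kill the script
false 2>/dev/null || true
echo "still running"
```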

Verifying

# Check rules are present
ip -4 rule list | grep 5002

# Test container access
curl -s http://127.0.0.1:<your-port>/health

# Watch the timer
systemctl status tailscale-bypass-rules.timer

Why This Is Hard to Debug

  • Container healthchecks run inside the container, where localhost always works. They'll never catch this.
  • docker ps shows the container as running and healthy.
  • The issue is intermittent — it only happens when Tailscale refreshes its rules, which can be days or weeks apart.
  • TCP connects but HTTP hangs, so it doesn't look like a routing problem at first glance.
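One way to close this monitoring gap is a probe that runs on the host, where the breakage is actually visible. A sketch reusing the same systemd timer pattern; the unit names and the /health path are placeholders, not part of the setup above:

```ini
# /etc/systemd/system/host-healthcheck.service (illustrative)
[Unit]
Description=Probe a container the way clients actually reach it

[Service]
Type=oneshot
# --max-time turns the silent hang into a hard failure in the journal
ExecStart=/usr/bin/curl --silent --fail --max-time 5 http://127.0.0.1:3001/health

# /etc/systemd/system/host-healthcheck.timer (illustrative)
[Unit]
Description=Probe container from the host every minute

[Timer]
OnUnitActiveSec=60

[Install]
WantedBy=timers.target
```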

Other Traffic You Might Need to Bypass

If you're on an IPv6-only host using Tailscale's exit node for IPv4, you may also need bypasses for regional services:

# Example: Hetzner apt mirror
ip -6 rule add to 2a01:4ff:ff00::3:3 pref 5000 lookup main 2>/dev/null || true

Same principle: anything that should go direct instead of through the exit node needs a rule with priority < 5270.
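The pattern generalizes to any list of destinations. A sketch that prints the commands instead of applying them (a dry run; drop the echo and run as root to apply; the CIDRs are the examples from above, not recommendations):

```shell
#!/bin/bash
# Generate one bypass rule per destination, handling IPv4 and IPv6,
# with ascending prefs that all stay below Tailscale's 5270.
pref=5000
for dst in 172.16.0.0/12 "2a01:4ff:ff00::3:3"; do
  fam=-4
  if [[ $dst == *:* ]]; then fam=-6; fi
  echo ip $fam rule add to "$dst" pref "$pref" lookup main
  pref=$((pref + 1))
done
```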
