Run the Tailscale Docker container on a Mikrotik router
# Run the Tailscale Docker container on a Mikrotik router
# Based on Mikrotik container documentation:
# https://help.mikrotik.com/docs/display/ROS/Container
# Tailscale container documentation:
# https://hub.docker.com/r/tailscale/tailscale
# Tested on an hAP ax3 with RouterOS 7.7
### Install and enable container mode on the router
# This section only needs to be run once to configure the router.
# 1. Download "Extra packages" for your architecture and OS version at https://mikrotik.com/download
# 2. Upload container-*-arm64.npk to the router
# 3. Reboot to install the npk:
#    /system reboot
# 4. Enable container mode:
#    /system/device-mode/update container=yes
#    Press reset on the router to confirm enabling container mode
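# Optionally, after the reboot and reset, confirm the steps above took effect.
# (A sketch; the exact print output varies by RouterOS version.)
# /system/package/print
# /system/device-mode/print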
# Add veth interface for the container:
/interface/veth/add name=veth1 address=172.17.0.2/24 gateway=172.17.0.1
# Create a bridge for containers and add the veth to it:
# (on RouterOS 7.7 the "dockers" bridge already existed, so creating it was not needed)
# /interface/bridge/add name=dockers
/ip/address/add address=172.17.0.1/24 interface=dockers
/interface/bridge/port add bridge=dockers interface=veth1
# Add the dockers bridge as a LAN interface to allow Tailscale clients to reach the router admin page
/interface/list/member/add list=LAN interface=dockers
# Set up NAT for outgoing traffic:
/ip/firewall/nat/add chain=srcnat action=masquerade src-address=172.17.0.0/24
# Configure container mode settings
/container/config/set registry-url=https://registry-1.docker.io tmpdir=usb1/pull ram-high=200M
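# Once the veth, bridge, and NAT are in place, the container network can be sanity-checked
# from the router using the addresses configured above (172.17.0.2 will only answer once
# the container is running):
# /ping 172.17.0.1 count=3
# /ping 172.17.0.2 count=3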
### Tailscale container setup
# Re-run the commands in this section when reconfiguring or updating Tailscale.
# * Configure TS_AUTH_KEY with a valid key to authenticate
#   Create a key at https://login.tailscale.com/admin/settings/keys
#   Reusable: no
#   Ephemeral: no
# * State is stored on a USB flash drive
# * Uses userspace networking (didn't try to pass in /dev/net/tun)
# * After the initial authentication, make the following changes in the Tailscale admin console for the "router" host:
#   * Disable key expiry
#   * Enable subnet routes (optional)
#   * Enable exit node (optional)
/container/envs/remove [find name="tailscale_envs"]
# Setting PATH _really_ shouldn't be needed; this is a Mikrotik bug, I think
/container/envs/add name=tailscale_envs key=PATH value="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
/container/envs/add name=tailscale_envs key=TS_EXTRA_ARGS value="--hostname=router --advertise-exit-node"
/container/envs/add name=tailscale_envs key=TS_ROUTES value="192.168.88.0/24"
# Auth key for the initial login to Tailscale; after the first login the saved state will be used to reauthenticate
/container/envs/add name=tailscale_envs key=TS_AUTH_KEY value="tskey-auth-REDACTED-REDACTED"
/container/envs/add name=tailscale_envs key=TS_STATE_DIR value="/var/lib/tailscale"
# Userspace networking will be used automatically because /dev/net/tun is unavailable
# /container/envs/add name=tailscale_envs key=TS_USERSPACE value=""
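# If LAN hosts should be able to reach the tailnet through this container, the image's
# Docker Hub page (linked above) also documents a TS_SOCKS5_SERVER variable for userspace
# mode. Untested here, and port 1055 is an arbitrary choice:
# /container/envs/add name=tailscale_envs key=TS_SOCKS5_SERVER value=":1055"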
/container/mounts/remove [find name="tailscale_state"]
/container/mounts/add name=tailscale_state src=usb1/tailscale_state dst=/var/lib/tailscale
# Does not work: mounting /dev/net/tun prevents the container from starting (also, adding NET_ADMIN and NET_RAW does not seem possible)
# /container/mounts/add name=tailscale_tun src=/dev/net/tun dst=/dev/net/tun
# Create the container, referencing the mounts and environment variables above
/container/remove [find hostname="tailscale"]
/container/add remote-image=tailscale/tailscale:latest interface=veth1 root-dir=usb1/tailscale mounts=tailscale_state envlist=tailscale_envs hostname=tailscale start-on-boot=yes logging=yes
# The container's status will initially be status=extracting while the image is pulled
# There's no great way to wait for the extraction to complete, so loop until it's status=stopped, with a 60s timeout
# Then start the container
# The container will automatically start when the router reboots
{
    :local break false;
    :local i 0;
    :local container [/container/find hostname="tailscale"]
    :while (!$break) do={
        :set i ($i + 1);
        :delay 5s;
        :local containerStatus [/container/get $container status];
        :log info "waiting for container to extract, status: $containerStatus";
        :if ($containerStatus = "stopped") do={:set break true;}
        :if ($i=12) do={:set break true;}
    }
    /container/start $container
}
# TODO: If the container dies, it does not appear to restart automatically
# A (scheduled?) script to restart it may be needed
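# One possible approach (an untested sketch): a scheduler that periodically starts the
# container if it is stopped. The "tailscale-watchdog" name and 5m interval are arbitrary
# choices; a container still extracting reports status=extracting, so it is not affected.
# /system/script/add name=tailscale-watchdog source={
#     :local container [/container/find hostname="tailscale"];
#     :if ([/container/get $container status] = "stopped") do={/container/start $container}
# }
# /system/scheduler/add name=tailscale-watchdog interval=5m on-event=tailscale-watchdog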
### Miscellaneous commands
# Check container status, current tag, etc.
# /container/print
# Start container
# Make sure the container has been added and is status=stopped by using /container/print
# /container/start [find hostname="tailscale"]
# Shell
# /container/shell [find hostname="tailscale"]
# There is no pull/update command; deleting and recreating the container is needed to re-pull the image
# The image is cached once when /container/add runs and will not be updated across restarts
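# Updating therefore means removing the container and re-adding it with the same
# /container/add command used above; the Tailscale state survives because it lives in
# the usb1/tailscale_state mount, not in the container's root-dir:
# /container/stop [find hostname="tailscale"]
# /container/remove [find hostname="tailscale"]
# /container/add remote-image=tailscale/tailscale:latest interface=veth1 root-dir=usb1/tailscale mounts=tailscale_state envlist=tailscale_envs hostname=tailscale start-on-boot=yes logging=yes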
# Cleanup
# /container/stop [find hostname="tailscale"]
# /container/remove [find hostname="tailscale"]
# /container/envs/remove [find name="tailscale_envs"]
# /container/mounts/remove [find name="tailscale_state"]
# /file/remove usb1/tailscale_state
I was only able to get this to work after changing the docker image source in line 39. Without that I was getting error 400 constantly.