- I faced bandwidth issues between a WG Peer and a WG Server. Download bandwidth from the WG Server to the WG Peer was significantly reduced, and upload bandwidth was practically non-existent.
- I found a few Reddit posts saying that the right MTU needs to be chosen, so I wrote a script to find an optimal MTU.
- Ideally I would have liked to run all possible MTU combinations for both the WG Server and the WG Peer, but for simplicity I chose to fix the WG Server at the original MTU of 1420 and tried all MTUs from 1280 to 1500 on the WG Peer.
- On the WG Server, I started an `iperf3` server.
- On the WG Peer, I wrote a script that does the following (a sketch of such a loop is shown after this list):
  - `wg-quick down wg0`
  - Edit the MTU in the `/etc/wireguard/wg0.conf` file
  - `wg-quick up wg0`
  - `iperf3 -c 172.123.0.1 -J -t 5 -i 5`
    - This runs an `iperf3` client that connects to `172.123.0.1`, which is the WG Server gateway.
    - The `iperf3` client runs for 5 seconds, then stops and dumps a JSON output.
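The loop boils down to something like the sketch below. The CSV output, the `jq` parsing, and the error handling are illustrative additions; the config path and server address are the ones used above.

```bash
#!/usr/bin/env bash
# MTU sweep on the WG Peer: try every MTU from 1280 to 1500 and record the measured bandwidth.
set -euo pipefail

for mtu in $(seq 1280 1500); do
    wg-quick down wg0 || true                                   # tolerate the interface already being down
    sed -i "s/^MTU = .*/MTU = ${mtu}/" /etc/wireguard/wg0.conf  # set the candidate MTU
    wg-quick up wg0
    # 5-second run with JSON output; add -R to measure the download (server -> peer) direction
    result="$(iperf3 -c 172.123.0.1 -J -t 5 -i 5)"
    bps="$(echo "${result}" | jq '.end.sum_received.bits_per_second')"
    echo "${mtu},${bps}" >> mtu_sweep.csv
done
```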
- Max bandwidth provided by my ISP (1000 Mbps download, 50 Mbps upload)
- WG Server is a VPS running Ubuntu 20.04 on a cloud provider.
- WG Peer is a PC running Ubuntu 20.04 locally at home.
I followed this tutorial to set up my WireGuard configurations.
WG-server
# /etc/wireguard/wg0.conf
[Interface]
Address = 172.123.0.1/24
MTU = 1420
SaveConfig = true
PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE; ip6tables -A FORWARD -i wg0 -j ACCEPT; ip6tables -t nat -A POSTROUTING -o eth0 -j MASQUERADE; iptables -t nat -A POSTROUTING -o ens10 -j MASQUERADE; ip6tables -t nat -A POSTROUTING -o ens10 -j MASQUERADE
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE; ip6tables -D FORWARD -i wg0 -j ACCEPT; ip6tables -t nat -D POSTROUTING -o eth0 -j MASQUERADE; iptables -t nat -D POSTROUTING -o ens10 -j MASQUERADE; ip6tables -t nat -D POSTROUTING -o ens10 -j MASQUERADE
ListenPort = 51820
PrivateKey = xxxxxxxxxxxxxxxxxx
[Peer]
PublicKey = xxxxxxxxxxxxxxxxxx
AllowedIPs = 172.123.0.2/32
Endpoint = X.X.X.X:61426
WG-peer
# /etc/wireguard/wg0.conf
[Interface]
PrivateKey = xxxxxxxxxxxxxxxxxx
ListenPort = 51820
Address = 172.123.0.2/24
MTU = 1384
[Peer]
PublicKey = xxxxxxxxxxxxxxxxxx
AllowedIPs = 172.123.0.0/24, 10.1.0.0/24
Endpoint = Y.Y.Y.Y:51820
PersistentKeepalive = 5
- As you can see in the image, the original MTU setting of 1420 for both peer and server gives abysmal bandwidth.
- I found that MTU 1384 on the WG Peer with 1420 on the WG Server seems to give almost the best bandwidth.
- With WG Peer MTU 1384, I reach my ISP's maximum upload bandwidth of 50 Mbps, but I was only able to hit 550 Mbps of download bandwidth out of my ISP's maximum of 1000 Mbps. This reduction in download bandwidth might be due to other factors, but 550 Mbps was sufficient for my use cases, so I stopped testing further.
If anyone has an explanation for this behavior or has found mistakes in my configurations or tests, please let me know.
This thread helped me. Thank you to everyone above.
I have a wireguard server that forwards incoming traffic on selected ports to a lone wireguard client. The server performs DNAT to route the packets to the client. I want the client to know who the packet came from, so the server does not perform SNAT. I use policy-based routing on the client to route responses back through the wireguard tunnel.
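For context, the forwarding and return-path setup looks roughly like the sketch below. The port (8080), the routing table number (123), and the 172.123.0.x addresses are illustrative, reusing the addressing from the configs above.

```bash
# On the wireguard server: DNAT selected ports to the client's tunnel address, no SNAT,
# so the client sees the real source IP.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8080 -j DNAT --to-destination 172.123.0.2
iptables -A FORWARD -i eth0 -o wg0 -p tcp --dport 8080 -j ACCEPT

# On the wireguard client: policy-based routing so replies sourced from the tunnel address
# go back out through wg0 instead of the default route.
ip rule add from 172.123.0.2 table 123
ip route add default dev wg0 table 123
```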
I discovered an issue with dropped packets during an HTTP upload to a server running on the wireguard client. I used `tcpdump` to capture traffic on the wireguard server's `eth0` and `wg0` interfaces and `dropwatch` to inspect packet drop reasons while testing the upload. I noticed several `PKT_TOO_BIG` entries.

I was encountering the same problem described in this page: https://www.roe.ac.uk/~hme/tcpoffload/index.shtml
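The capture and drop inspection amount to something like this (a sketch; the capture file names are arbitrary):

```bash
# Capture on both interfaces while reproducing the upload
tcpdump -ni eth0 -w eth0.pcap &
tcpdump -ni wg0 -w wg0.pcap &

# Watch kernel packet-drop locations; type "start" at the dropwatch> prompt
dropwatch -l kas
```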
Due to receive offload, the wireguard server's `eth0` interface was merging packets to a size above the route's MTU. The lowest MSS of the synchronized TCP socket was 1460 (40 bytes fewer than 1500, compensating for the IP and TCP headers), so the wireguard server was attempting to forward packets of 2960 bytes (1460 * 2 + 40, or sometimes larger) over an interface with an MTU of 1420 (80 bytes fewer than 1500, compensating for the wireguard header). Since the IP frames were flagged `Don't fragment`, the wireguard server dropped them instead of fragmenting them.
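The offload features responsible for this merging can be inspected like so (assuming the uplink is named `eth0` as above; feature names vary by driver):

```bash
# generic-receive-offload (GRO) and large-receive-offload (LRO) are the features
# that coalesce incoming segments into oversized packets
ethtool -k eth0 | grep offload
```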
I disabled offload optimization on the wireguard server's `eth0` and `wg0` interfaces (see the `ethtool` sketch below). Note that you may not need to disable all of these features, and the optimal configuration will vary between the two interfaces.
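A sketch of the kind of commands involved; the exact feature list is an assumption on my part, not every driver supports every feature, and toggling them on `wg0` may be a no-op:

```bash
# Disable segmentation/receive offloads on the uplink and the tunnel interface
ethtool -K eth0 gro off gso off tso off lro off
ethtool -K wg0 gro off gso off
```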
At this point the packets were no longer being merged, but they were still being dropped because they were 80 bytes too large and could not fit the wireguard header. As a hack, I configured the wireguard server to rewrite the MSS value of TCP SYN packets based on the `eth0` MTU (adjusting for the IP header automatically). Then I lowered the MTU of the `eth0` interface to allow the `TCPMSS` target to clamp down to the right value.
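Roughly, that maps onto rules like the following (a sketch; the `FORWARD`/`-o wg0` match and the use of `--clamp-mss-to-pmtu` are assumptions, and 1420 matches the value discussed here):

```bash
# Clamp the MSS of TCP SYNs forwarded into the tunnel to the path MTU (minus headers),
# then lower the uplink MTU so the clamp resolves to a value that fits the wireguard header
iptables -t mangle -A FORWARD -o wg0 -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
ip link set dev eth0 mtu 1420
```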
This will interfere with wireguard's MTU autodetection by setting the `wg0` interface MTU another 80 bytes lower when the interface is restarted, so the MTU should be fixed to the respective value (1420 in my case) in the wireguard config. This is a bit messy but works well enough for me. If someone comes up with a more elegant solution, please share it.
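In config terms, that just means pinning the value explicitly in the `[Interface]` section instead of letting `wg-quick` derive it:

```ini
# /etc/wireguard/wg0.conf on the wireguard server
[Interface]
# Pin the tunnel MTU so wg-quick does not derive it from the lowered eth0 MTU
MTU = 1420
```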