A server with two interfaces on the same subnet was binding iperf3 to each address, but all return traffic used the lower-metric interface. This caused asymmetric routing and capped each flow at ~470 Mbps, even though each NIC could do ~940+ Mbps independently.
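The interface and route details summarized below can be confirmed with standard iproute2 commands; on a similar setup, these two are usually enough to see the address assignments and the metric that decides the return path:

```bash
# Brief per-interface address summary
ip -br addr

# Default routes with their metrics; the lowest metric wins for locally generated traffic
ip route show default
```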
- Interfaces & Metrics
  - `enP3p49s0` → 10.0.7.6 (metric 100)
  - `enP4p65s0` → 10.0.7.5 (metric 101)
- Default Routes
  - Both point at 10.0.4.1, but traffic prefers `enP3p49s0` due to the lower metric.
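You can see the return-path preference directly by asking the kernel which route it would pick for traffic sourced from each address; with only the main routing table in play, both lookups resolve to the lower-metric interface. The client address 10.0.7.100 below is just a placeholder for any host on the subnet.

```bash
# Before PBR, both lookups typically report "dev enP3p49s0"
ip route get 10.0.7.100 from 10.0.7.6
ip route get 10.0.7.100 from 10.0.7.5
```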
Use Policy-Based Routing (PBR) to bind each source IP to its own routing table and interface, ensuring correct return-path selection.
- Define Custom Routing Tables

  ```bash
  echo "100 rt_enP3p49s0" | sudo tee -a /etc/iproute2/rt_tables
  echo "101 rt_enP4p65s0" | sudo tee -a /etc/iproute2/rt_tables
  ```
- Create Rules & Routes for `enP3p49s0` (10.0.7.6)

  ```bash
  sudo ip rule add from 10.0.7.6 table rt_enP3p49s0
  sudo ip route add 10.0.4.0/22 dev enP3p49s0 src 10.0.7.6 table rt_enP3p49s0
  sudo ip route add default via 10.0.4.1 dev enP3p49s0 table rt_enP3p49s0
  ```
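  To sanity-check this table before moving on, list the rule and the routes it now holds; these are standard iproute2 queries:

  ```bash
  # The rule should map source 10.0.7.6 to rt_enP3p49s0,
  # and the table should contain the subnet and default routes added above
  ip rule show
  ip route show table rt_enP3p49s0
  ```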
- Create Rules & Routes for `enP4p65s0` (10.0.7.5)

  ```bash
  sudo ip rule add from 10.0.7.5 table rt_enP4p65s0
  sudo ip route add 10.0.4.0/22 dev enP4p65s0 src 10.0.7.5 table rt_enP4p65s0
  sudo ip route add default via 10.0.4.1 dev enP4p65s0 table rt_enP4p65s0
  ```
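  With both tables in place, repeating the earlier route lookups should show each source address leaving via its own interface (10.0.7.100 again stands in for a client on the subnet):

  ```bash
  # Expect "dev enP3p49s0" for the first lookup and "dev enP4p65s0" for the second
  ip route get 10.0.7.100 from 10.0.7.6
  ip route get 10.0.7.100 from 10.0.7.5
  ```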
- Verify & Re-Test
  - Restart your `iperf3` servers:

    ```bash
    iperf3 -s -B 10.0.7.6
    iperf3 -s -B 10.0.7.5
    ```

  - Run clients simultaneously and confirm ~940 Mbps on both flows.
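    The exact client commands depend on where the clients run; a minimal sketch, assuming two client hosts started at the same time against the two server addresses:

    ```bash
    # On the first client host: drive the flow that terminates on 10.0.7.6
    iperf3 -c 10.0.7.6 -t 30

    # On the second client host: drive the flow that terminates on 10.0.7.5
    iperf3 -c 10.0.7.5 -t 30
    ```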
By giving each source IP its own routing table, return traffic now follows the same interface it arrived on, restoring full per-NIC throughput on both flows.