Rock 5 ITX optimizations: Samba and dual 2.5GbE ports.
# /etc/sysctl.d/99-nas-network.conf
# Allow TCP socket buffers to grow to 32 MiB for high-throughput transfers
net.core.rmem_max = 33554432
net.core.wmem_max = 33554432
# min / default / max receive and send buffer sizes per TCP socket (bytes)
net.ipv4.tcp_rmem = 4096 87380 33554432
net.ipv4.tcp_wmem = 4096 87380 33554432
# Window scaling is needed for TCP windows larger than 64 KiB (on by default)
net.ipv4.tcp_window_scaling = 1
#vm.swappiness=10
#vm.vfs_cache_pressure=50
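
One way to load these values without a reboot (assuming the drop-in path above) is:

    # Re-apply every sysctl drop-in, including 99-nas-network.conf
    sudo sysctl --system
    # Spot-check that the new maximums are active
    sysctl net.core.rmem_max net.core.wmem_max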

Fixing Asymmetric Routing & TCP Throughput Issues with Policy-Based Routing

Problem

A server with two interfaces on the same subnet was running an iperf3 server bound to each address, but all return traffic left through the lower-metric interface. The resulting asymmetric routing capped each flow at ~470 Mbps, even though each NIC can do ~940+ Mbps independently.

  • Interfaces & Metrics
    • enP3p49s0 → 10.0.7.6 (metric 100)
    • enP4p65s0 → 10.0.7.5 (metric 101)
  • Default Routes
    Both point at 10.0.4.1, but traffic prefers enP3p49s0 due to the lower metric.
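
The asymmetry is easy to see before any changes: a source-constrained route lookup still resolves to the lower-metric interface for both addresses. A quick check (10.0.4.100 is a placeholder for the iperf3 client's address):

    ip route get 10.0.4.100 from 10.0.7.6   # picks dev enP3p49s0
    ip route get 10.0.4.100 from 10.0.7.5   # also picks dev enP3p49s0, so replies leave the "wrong" NIC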

Solution

Use Policy-Based Routing (PBR) to bind each source IP to its own routing table and interface, ensuring correct return-path selection.
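
For context, policy rules are evaluated in ascending priority before the ordinary main-table lookup, and a from-source rule simply redirects the lookup into its own table. On a typical system the starting rule list looks like this; the rules added in the steps below slot in ahead of the main lookup:

    ip rule show
    # 0:      from all lookup local
    # 32766:  from all lookup main
    # 32767:  from all lookup default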

Steps

  1. Define Custom Routing Tables

    echo "100 rt_enP3p49s0" | sudo tee -a /etc/iproute2/rt_tables
    echo "101 rt_enP4p65s0" | sudo tee -a /etc/iproute2/rt_tables
  2. Create Rules & Routes for enP3p49s0 (10.0.7.6)

    sudo ip rule add from 10.0.7.6 table rt_enP3p49s0
    sudo ip route add 10.0.4.0/22 dev enP3p49s0 src 10.0.7.6 table rt_enP3p49s0
    sudo ip route add default via 10.0.4.1 dev enP3p49s0 table rt_enP3p49s0
  3. Create Rules & Routes for enP4p65s0 (10.0.7.5)

    sudo ip rule add from 10.0.7.5 table rt_enP4p65s0
    sudo ip route add 10.0.4.0/22 dev enP4p65s0 src 10.0.7.5 table rt_enP4p65s0
    sudo ip route add default via 10.0.4.1 dev enP4p65s0 table rt_enP4p65s0
  4. Verify & Re-Test

    • Restart your iperf3 servers:
      iperf3 -s -B 10.0.7.6
      iperf3 -s -B 10.0.7.5
    • Run clients simultaneously and confirm ~940 Mbps on both flows.
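
Before re-running the clients, the rules and per-table routes can be confirmed (table names as defined in step 1):

    ip rule show                        # both "from 10.0.7.x" rules should appear ahead of "lookup main"
    ip route show table rt_enP3p49s0    # subnet route + default via 10.0.4.1 on enP3p49s0
    ip route show table rt_enP4p65s0    # subnet route + default via 10.0.4.1 on enP4p65s0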

By ensuring each source IP uses its own routing table, return traffic now follows the same interface it arrived on, restoring full-duplex TCP performance.
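
The same lookups from the Problem section now return one interface per source address (10.0.4.100 again stands in for the client):

    ip route get 10.0.4.100 from 10.0.7.6   # expect dev enP3p49s0
    ip route get 10.0.4.100 from 10.0.7.5   # expect dev enP4p65s0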

# /etc/samba/smb.conf
[global]
# Require at least SMB2 and allow up to SMB3
server min protocol = SMB2
server max protocol = SMB3
# Disable Nagle, mark traffic low-delay, and use 256 KiB socket buffers
socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=262144 SO_SNDBUF=262144
# ACL behaviour for Windows clients
acl allow execute always = true
acl map full control = yes
# Drop connections idle for 60 minutes and cache working-directory lookups
deadtime = 60
getwd cache = true
# Throughput tuning: splice large incoming writes, relax per-request syncing,
# use sendfile for reads, and push writes through the async I/O path
min receivefile size = 16384
strict sync = no
sync always = no
use sendfile = true
aio write size = 1
# Rest of config
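
After editing smb.conf, the syntax can be validated and Samba reloaded; a minimal check (the service name varies by distribution, e.g. smbd or smb):

    # Validate the configuration and print the effective settings
    testparm -s
    # Restart Samba so the new options take effect
    sudo systemctl restart smbd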
# /etc/sysctl.conf
net.core.rmem_max = 33554432
net.core.wmem_max = 33554432
net.ipv4.tcp_rmem = 4096 87380 33554432
net.ipv4.tcp_wmem = 4096 87380 33554432
net.ipv4.tcp_window_scaling = 1
#vm.swappiness=10
#vm.vfs_cache_pressure=50