@scyto
Last active May 17, 2025 18:16
Thunderbolt Networking Setup

Thunderbolt Networking

This gist is part of this series.

You will need Proxmox kernel 6.2.16-14-pve or higher.

Load Kernel Modules

  • add the thunderbolt and thunderbolt-net kernel modules (this must be done on all nodes - yes, I know it can sometimes work without them, but the thunderbolt-net one has interesting behaviour, so do as I say and add both ;-)
    1. nano /etc/modules and add the modules at the bottom of the file, one on each line (see the sketch below)
    2. save using ctrl-x, then y, then enter
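
After the edit, the end of /etc/modules should simply list the two modules - a minimal sketch of the file's tail:

thunderbolt
thunderbolt-net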

Prepare /etc/network/interfaces

Doing this means we don't have to give each thunderbolt interface a manual IPv6 address and that these addresses stay constant no matter what. Add the following to each node using nano /etc/network/interfaces.

If you see any sections called thunderbolt0 or thunderbolt1, delete them at this point.

Create entries to prepopulate the GUI with a reminder

Doing this means we don't have to give each thunderbolt interface a manual IPv6 or IPv4 address and that these addresses stay constant no matter what.

Add the following to each node using nano /etc/network/interfaces - this is also to remind you not to edit en05 and en06 in the GUI.

This fragment should go between the existing auto lo section and the adapter sections.

iface en05 inet manual
#do not edit in GUI

iface en06 inet manual
#do not edit in GUI

If you see any thunderbolt sections, delete them from the file before you save it.

DO NOT DELETE the source /etc/network/interfaces.d/* line - it will always exist on the latest versions and should be the last or next-to-last line in the /etc/network/interfaces file.
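
For orientation, a sketch of how the fragment sits in a typical Proxmox /etc/network/interfaces - your existing NIC and bridge stanzas will differ, keep them as they are:

auto lo
iface lo inet loopback

iface en05 inet manual
#do not edit in GUI

iface en06 inet manual
#do not edit in GUI

# ... your existing NIC and vmbr0 bridge stanzas remain here unchanged ...

source /etc/network/interfaces.d/*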

Rename Thunderbolt Connections

This is needed as Proxmox doesn't recognize the thunderbolt interface names. There are various methods to do this; this one was selected after trial and error because:

  • the thunderboltX naming is not fixed to a port (it seems to be based on the sequence in which you plug the cables in)
  • the MAC address of the interfaces changes with most cable insertion and removal events
  1. use the udevadm monitor command to find your device IDs when you insert and remove each TB4 cable (see the example after these steps). Yes, you can use other ways to do this; I recommend this one as it is a great way to understand what udev does - the command proved more useful to me than syslog or lspci for troubleshooting thunderbolt issues and behaviours. In my case my two PCI paths are 0000:00:0d.2 and 0000:00:0d.3; if you bought the same hardware this will be the same on all 3 units. Don't assume your PCI device paths will be the same as mine.

  2. create a link file using nano /etc/systemd/network/00-thunderbolt0.link and enter the following content:

[Match]
Path=pci-0000:00:0d.2
Driver=thunderbolt-net
[Link]
MACAddressPolicy=none
Name=en05
  3. create a second link file using nano /etc/systemd/network/00-thunderbolt1.link and enter the following content:
[Match]
Path=pci-0000:00:0d.3
Driver=thunderbolt-net
[Link]
MACAddressPolicy=none
Name=en06
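
For reference, the invocation used in step 1 above - run it, then plug and unplug each TB4 cable and note the PCI path (e.g. 0000:00:0d.2) that shows up in the event's device path:

udevadm monitor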

Set Interfaces to UP on reboots and cable insertions

This section ensures that the interfaces will be brought up at boot or on cable insertion with whatever settings are in /etc/network/interfaces - this shouldn't need to be done; it seems like a bug in the way thunderbolt networking is handled (I assume this is Debian-wide but haven't checked).

Huge thanks to @corvy for figuring out a script that should make this much, much more reliable for most.

  1. create a udev rule to detect cable insertion using nano /etc/udev/rules.d/10-tb-en.rules with the following content:
ACTION=="move", SUBSYSTEM=="net", KERNEL=="en05", RUN+="/usr/local/bin/pve-en05.sh"
ACTION=="move", SUBSYSTEM=="net", KERNEL=="en06", RUN+="/usr/local/bin/pve-en06.sh"
  2. save the file

  3. create the first script referenced above using nano /usr/local/bin/pve-en05.sh with the following content:

#!/bin/bash

LOGFILE="/tmp/udev-debug.log"
VERBOSE="" # Set this to "-v" for verbose logging
IF="en05"

echo "$(date): pve-$IF.sh triggered by udev" >> "$LOGFILE"

# If multiple interfaces go up at the same time, 
# retry 10 times and break the retry when successful
for i in {1..10}; do
    echo "$(date): Attempt $i to bring up $IF" >> "$LOGFILE"
    /usr/sbin/ifup $VERBOSE $IF >> "$LOGFILE" 2>&1 && {
        echo "$(date): Successfully brought up $IF on attempt $i" >> "$LOGFILE"
        break
    }
  
    echo "$(date): Attempt $i failed, retrying in 3 seconds..." >> "$LOGFILE"
    sleep 3
done

Save the file, then:

  4. create the second script referenced above using nano /usr/local/bin/pve-en06.sh with the following content:
#!/bin/bash

LOGFILE="/tmp/udev-debug.log"
VERBOSE="" # Set this to "-v" for verbose logging
IF="en06"

echo "$(date): pve-$IF.sh triggered by udev" >> "$LOGFILE"

# If multiple interfaces go up at the same time, 
# retry 10 times and break the retry when successful
for i in {1..10}; do
    echo "$(date): Attempt $i to bring up $IF" >> "$LOGFILE"
    /usr/sbin/ifup $VERBOSE $IF >> "$LOGFILE" 2>&1 && {
        echo "$(date): Successfully brought up $IF on attempt $i" >> "$LOGFILE"
        break
    }
  
    echo "$(date): Attempt $i failed, retrying in 3 seconds..." >> "$LOGFILE"
    sleep 3
done

and save the file

  5. make both scripts executable with chmod +x /usr/local/bin/*.sh
  6. run update-initramfs -u -k all to propagate the new link files into initramfs
  7. Reboot (restarting networking, init 1 and init 3 are not good enough, so reboot)
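
Once the node is back up, a quick sanity check using the names and log file defined above:

# the renamed interfaces should now exist
ip link show en05
ip link show en06
# the bring-up scripts log their attempts here
cat /tmp/udev-debug.log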

Enabling IP Connectivity

Proceed to the next gist.

Slow Thunderbolt Performance? Too Many Retries? No traffic? Try this!

verify neighbors can see each other (connectivity troubleshooting)

Install LLDP - this is great to see which nodes can see which.

  • install lldpctl with apt install lldpd on all 3 nodes
  • execute lldpctl - you should see neighbor info for the other nodes
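
A minimal check on each node (the lldpd daemon is normally started automatically when the package is installed):

apt install lldpd
# should list the neighbouring nodes seen on en05 and en06
lldpctl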

make sure iommu is enabled (speed troubleshooting)

If you are having speed issues, make sure the following is set on the kernel command line in the /etc/default/grub file: intel_iommu=on iommu=pt. Once set, be sure to run update-grub and reboot.

Everyone's grub command line is different - mine also has i915 virtualization entries, which you don't need if you are not doing that. If you get this wrong you can break your machine.

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt" (note: if you have more things in your cmd line DO NOT REMOVE them - just add the two iommu options; it doesn't matter where).
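
To confirm the change took effect (after running update-grub and rebooting):

# the intel_iommu=on and iommu=pt options should appear here
cat /proc/cmdline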

Pinning the Thunderbolt Driver (speed and retries troubleshooting)

identify your P and E cores by running the following

cat /sys/devices/cpu_core/cpus && cat /sys/devices/cpu_atom/cpus

You should get two lines on an Intel system with P and E cores: the first line should be your P cores, the second line your E cores.

for example on mine:

root@pve1:/etc/pve# cat /sys/devices/cpu_core/cpus && cat /sys/devices/cpu_atom/cpus
0-7
8-15

create a script to apply affinity settings every time a thunderbolt interface comes up

  1. make a file at /etc/network/if-up.d/thunderbolt-affinity
  2. add the following to it - make sure to replace echo X-Y with whatever the report told you were your performance cores - e.g. echo 0-7
#!/bin/bash

# Check if the interface is either en05 or en06
if [ "$IFACE" = "en05" ] || [ "$IFACE" = "en06" ]; then
    # Set Thunderbolt affinity to P-cores
    grep thunderbolt /proc/interrupts | cut -d ":" -f1 | xargs -I {} sh -c 'echo X-Y | tee "/proc/irq/{}/smp_affinity_list"'
fi
  3. save the file - done
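
To confirm the pinning stuck after en05/en06 come up, read the affinity back (this only reads what the script above wrote):

# list the thunderbolt IRQs and their current CPU affinity
grep thunderbolt /proc/interrupts | cut -d ":" -f1 | while read irq; do
    echo -n "IRQ $irq: "
    cat "/proc/irq/$irq/smp_affinity_list"
done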

Extra Debugging for Thunderbolt

dynamic kernel tracing - adds more info to dmesg, doesn't overwhelm dmesg

I have only tried this on 6.8 kernels, so YMMV. If you want more TB messages in dmesg to see why a connection might be failing, here is how to turn on dynamic tracing.

For boot time you will need to add it to the kernel command line by adding thunderbolt.dyndbg=+p to your /etc/default/grub file, running update-grub and rebooting.

To expand the example above:

`GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt thunderbolt.dyndbg=+p"`  

Don't forget to run update-grub after saving the change to the grub file.

For runtime debug you can run the following command (it will revert on the next boot, so this can't be used to capture what happens at boot time).

`echo -n 'module thunderbolt =p' > /sys/kernel/debug/dynamic_debug/control`
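
To turn the extra logging back off again without rebooting, remove the flag the same way (-p instead of =p):

`echo -n 'module thunderbolt -p' > /sys/kernel/debug/dynamic_debug/control`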

install tbtools

These tools can be used to inspect your thunderbolt system. Note they rely on Rust being installed; you must use the rustup script below and not install Rust via the package manager at this time (9/15/24).

apt install pkg-config libudev-dev git curl
curl https://sh.rustup.rs -sSf | sh
git clone https://github.com/intel/tbtools
restart your ssh session
cd tbtools
cargo install --path .
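
A quick smoke test after the install (cargo puts the binaries in ~/.cargo/bin, hence restarting your ssh session so they are on PATH; tblist is one of the tools the repo provides):

# list the thunderbolt devices visible to this host
tblist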

DarkPhyber-hg commented May 14, 2025

Sharing a shower-thought. I have not tested this yet, but will once i get a stable system. I was previously running the powersave governor on all cores, until i get a stable system i have all cores set to performance. I have seen more drops in thunderbolt-networking with powersave, but felt it was an acceptable tradeoff. I know others have had similar findings. Since I am considering pinning each IRQ to a specific core, I wonder if we can mix cores on the performance governor. Assign thunderbolt to specific cores and then, run those cores with the performance governor, and set all the other cores to powersave. Maybe something someone with a stable system wants to try.

I was lazy and just used chatgpt to write the scripts

identify available cpu governors:
for file in /sys/devices/system/cpu/cpu*/cpufreq/scaling_available_governors; do
  cpu=$(basename $(dirname $(dirname $file)))
  echo -n "$cpu: "
  cat "$file"
done


cpu0: performance powersave
cpu10: performance powersave
cpu11: performance powersave
cpu12: performance powersave
cpu13: performance powersave
cpu14: performance powersave
cpu15: performance powersave
cpu16: performance powersave
cpu17: performance powersave
cpu18: performance powersave
cpu19: performance powersave
cpu1: performance powersave
cpu2: performance powersave
cpu3: performance powersave
cpu4: performance powersave
cpu5: performance powersave
cpu6: performance powersave
cpu7: performance powersave
cpu8: performance powersave
cpu9: performance powersave

Verify currently active cpu governors:
for file in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do   cpu=$(basename $(dirname $(dirname $file)));   echo -n "$cpu: ";   cat "$file"; done

cpu0: performance
cpu10: performance
cpu11: performance
cpu12: performance
cpu13: performance
cpu14: performance
cpu15: performance
cpu16: performance
cpu17: performance
cpu18: performance
cpu19: performance
cpu1: performance
cpu2: performance
cpu3: performance
cpu4: performance
cpu5: performance
cpu6: performance
cpu7: performance
cpu8: performance
cpu9: performance


then just change the value of /sys/devices/system/cpu/cpu<X>/cpufreq/scaling_governor to whichever available governor you want to use for each core <X> independently. I have no idea how the system would behave if a hyperthreading core had 1 logical core set to perf and the other set to powersave, i imagine strange things would occur.

example might be something like this - a plain redirect to a glob won't work, so loop over the cores using brace expansion:

for g in /sys/devices/system/cpu/cpu{0..7}/cpufreq/scaling_governor; do echo "performance" > "$g"; done
for g in /sys/devices/system/cpu/cpu{8..19}/cpufreq/scaling_governor; do echo "powersave" > "$g"; done

EDIT: I tested this yesterday. It made virtually no difference in power usage ( my pdu measures power draw per outlet) on Proxmox opt-in kernel 6.14.0-2 and all aspm disabled, but it did still cause a significant increase in dropped packets. I don't feel like it's worth experimenting with any further.


DarkPhyber-hg commented May 17, 2025

spamming another update. while i haven't had a lockup of my vm after the last changes i made, i'm still looking to improve the retries and these ceph aio_submit retry messages in my system log.

I've been doing a bunch of testing since i still think it stems from packet loss. I've found no appreciable difference messing with kernel level settings for tcp window size, net.core.rmem_max and wmem_max and a few other kernel level settings. In fact i often made things worse. I also tried to disable offloading on the thunderbolt interfaces, it made performance worse, but i didn't methodically try different offloading combinations, i have seen some improvements on physical nic's with disabling only specific offloading parameters.

At this point i'm thinking either there's some kind of issue with flow control not working right, or the thunderbolt controller just can't keep up and is dropping/corrupting data when it's loaded bidirectionally. Why do i say corrupting? because looking at interface stats using ip link i'm also seeing crc errors. I see crc errors on all of my nodes, i'm using certified owc tb4 cables and i even tried an expensive active apple thunderbolt 4 cable, which rules out bad cables.

root@pve2:~# ip -s -s link show en05
10: en05: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65520 qdisc cake state UP mode DEFAULT group default qlen 1000
    link/ether 02:ea:fb:06:bf:19 brd ff:ff:ff:ff:ff:ff
    RX:    bytes  packets errors dropped  missed   mcast           
    215534087648 67161931    514       0     480       0 
    RX errors:     length    crc   frame    fifo overrun
                        2     32       0       0       0 
    TX:    bytes  packets errors dropped carrier collsns           
    300539920795 16346041      0       0       0       0 
    TX errors:    aborted   fifo  window heartbt transns
                        0      0       0       0       2 

I decided to mess around with the queueing discipline (qdisc) first thinking it might be a flow control issue. On my machine the thunderbolt interfaces default to a qdisc of pfifo_fast in my testing this has the highest retries. I found arguably less retries with pfifo. Enough of an improvement with fq to say it's not within the range of error, and a significant improvement with fq_codel. I found on average a 60-70% reduction in retries with fq_codel and with "iperf3 --bidir" a bidirectional 25-26gbps. I was still getting some packet drops on the interfaces, but as long as the application layer wasn't getting pissed off I'm not sure i care that much.

I wanted to take it a step further now since with ceph both en05 and en06 could be loaded concurrently. So i ran 2 x iperf3 servers on a node on different ports, for example say PVE3. Then i ran bidirectional iperf3s from both PVE2 and PVE4 at the same time to PVE3. The idea being to try and load both thunderbolt ports on PVE3 to see what happens. I immediately saw significantly reduced performance and increased retries. When i was running the iperf3 on a single machine i was seeing 25-26gbps both ways, but when both machines were hitting PVE3 throughput dropped off and was kind of asymmetric, for example i saw something like 14gbps/18gbps. I verified this with different nodes running the server each time.

Now i remember a post earlier from @razqqm using tc qdisc's to rate limit, so i tried a few rate limiting qdisc's. I tried "cake" and "tbf w/ fq_codel", i didn't try htb as @razqqm used. Cake is much easier to configure, but I thought maybe tbf w/ fq_codel might perform better, since fq_codel performed better on its own. I experimented with both of them at different bandwidth limits, on my 13900h's 15gbps seemed to be about the sweet spot when loading en05 and en06 at the same time and seeing minimal retries. I didn't see any significant difference in performance between the two of them, so i implemented cake. I still get retries and packet loss if both interfaces are loaded, but significantly less. Also significantly less packet loss and crc errors in production.

In production I'm still getting some of the ceph aio_submit retry messages in my system logs; however both are significantly reduced. I'm hopeful I can resolve these damn lockups, especially since i'm going on vacation in a week. I'm still trying to isolate a few more possible causes. But i'm hopeful other may find my multi-post novel here helpful.

to set the qdisc on boot create a file in /etc/network/if-up.d/, i called mine set-qdisc

vi /etc/network/if-up.d/set-qdisc

#!/bin/bash

# Check if the interface is either en05 or en06
if [ "$IFACE" = "en05" ] || [ "$IFACE" = "en06" ]; then
   tc qdisc del dev $IFACE root
   tc qdisc replace dev $IFACE root cake   bandwidth 15gbit 
fi
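
To check what is currently applied, or to experiment by hand before making it persistent:

tc qdisc show dev en05
# e.g. plain fq_codel, or cake with the 15gbit limit used above
tc qdisc replace dev en05 root fq_codel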
