Thunderbolt Networking Setup

Thunderbolt Networking

This gist is part of this series.

You will need Proxmox kernel 6.2.16-14-pve or higher.

Load Kernel Modules

  • add the thunderbolt and thunderbolt-net kernel modules (this must be done on all nodes - yes, I know it can sometimes work without them, but thunderbolt-net has interesting behaviour, so do as I say and add both ;-)
    1. nano /etc/modules and add the modules at the bottom of the file, one on each line (see the sketch below)
    2. save with ctrl+x, then y, then enter
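
A minimal sketch of how the end of /etc/modules should look after this step (any entries already in the file stay as they are):

# /etc/modules: kernel modules to load at boot time.
# ... existing entries stay here ...
thunderbolt
thunderbolt-net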

Prepare /etc/network/interfaces

Doing this means we don't have to give each thunderbolt interface a manual IPv6 address, and these addresses stay constant no matter what. Add the following to each node using nano /etc/network/interfaces

If you see any sections called thunderbolt0 or thunderbolt1, delete them at this point.

Create entries to prepopulate the GUI with a reminder

Doing this means we don't have to give each thunderbolt interface a manual IPv6 or IPv4 address, and these addresses stay constant no matter what.

Add the following to each node using nano /etc/network/interfaces - this is to remind you not to edit en05 and en06 in the GUI.

This fragment should go between the existing auto lo section and the adapter sections.

iface en05 inet manual
#do not edit in GUI

iface en06 inet manual
#do not edit in GUI

If you see any thunderbolt sections, delete them from the file before you save it.

DO NOT DELETE the source /etc/network/interfaces.d/* line - this will always exist on the latest versions and should be the last or next-to-last line in the /etc/network/interfaces file.
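
Putting it together, a rough sketch of how the finished /etc/network/interfaces might be ordered (your physical adapter and bridge stanzas will differ and stay unchanged):

auto lo
iface lo inet loopback

iface en05 inet manual
#do not edit in GUI

iface en06 inet manual
#do not edit in GUI

# ... your existing physical adapter and vmbr0 stanzas remain here ...

source /etc/network/interfaces.d/*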

Rename Thunderbolt Connections

This is needed because Proxmox doesn't recognize the default thunderbolt interface names. There are various methods to do this. This method was selected after trial and error because:

  • the thunderboltX naming is not fixed to a port (it seems to be based on the sequence in which you plug the cables in)
  • the MAC address of the interfaces changes with most cable insertion and removal events
  1. use the udevadm monitor command to find your device IDs when you insert and remove each TB4 cable. Yes, you can use other ways to do this; I recommend this one as it is a great way to understand what udev does - the command proved more useful to me than syslog or lspci for troubleshooting thunderbolt issues and behaviours. In my case my two PCI paths are 0000:00:0d.2 and 0000:00:0d.3; if you bought the same hardware this will be the same on all 3 units. Don't assume your PCI device paths will be the same as mine.

  2. create a link file using nano /etc/systemd/network/00-thunderbolt0.link and enter the following content:

[Match]
Path=pci-0000:00:0d.2
Driver=thunderbolt-net
[Link]
MACAddressPolicy=none
Name=en05
  3. create a second link file using nano /etc/systemd/network/00-thunderbolt1.link and enter the following content:
[Match]
Path=pci-0000:00:0d.3
Driver=thunderbolt-net
[Link]
MACAddressPolicy=none
Name=en06

Set Interfaces to UP on reboots and cable insertions

This section ensures that the interfaces will be brought up at boot or on cable insertion with whatever settings are in /etc/network/interfaces - this shouldn't need to be done; it seems like a bug in the way thunderbolt networking is handled (I assume this is Debian-wide but haven't checked).

Huge thanks to @corvy for figuring out a script that should make this much, much more reliable for most.

  1. create a udev rule to detect for cable insertion using nano /etc/udev/rules.d/10-tb-en.rules with the following content:
ACTION=="move", SUBSYSTEM=="net", KERNEL=="en05", RUN+="/usr/local/bin/pve-en05.sh"
ACTION=="move", SUBSYSTEM=="net", KERNEL=="en06", RUN+="/usr/local/bin/pve-en06.sh"
  2. save the file

  3. create the first script referenced above using nano /usr/local/bin/pve-en05.sh with the following content:

#!/bin/bash

LOGFILE="/tmp/udev-debug.log"
VERBOSE="" # Set this to "-v" for verbose logging
IF="en05"

echo "$(date): pve-$IF.sh triggered by udev" >> "$LOGFILE"

# If multiple interfaces go up at the same time, 
# retry 10 times and break the retry when successful
for i in {1..10}; do
    echo "$(date): Attempt $i to bring up $IF" >> "$LOGFILE"
    /usr/sbin/ifup $VERBOSE $IF >> "$LOGFILE" 2>&1 && {
        echo "$(date): Successfully brought up $IF on attempt $i" >> "$LOGFILE"
        break
    }
  
    echo "$(date): Attempt $i failed, retrying in 3 seconds..." >> "$LOGFILE"
    sleep 3
done

save the file, and then:

  4. create the second script referenced above using nano /usr/local/bin/pve-en06.sh with the following content:
#!/bin/bash

LOGFILE="/tmp/udev-debug.log"
VERBOSE="" # Set this to "-v" for verbose logging
IF="en06"

echo "$(date): pve-$IF.sh triggered by udev" >> "$LOGFILE"

# If multiple interfaces go up at the same time, 
# retry 10 times and break the retry when successful
for i in {1..10}; do
    echo "$(date): Attempt $i to bring up $IF" >> "$LOGFILE"
    /usr/sbin/ifup $VERBOSE $IF >> "$LOGFILE" 2>&1 && {
        echo "$(date): Successfully brought up $IF on attempt $i" >> "$LOGFILE"
        break
    }
  
    echo "$(date): Attempt $i failed, retrying in 3 seconds..." >> "$LOGFILE"
    sleep 3
done

and save the file

  5. make both scripts executable with chmod +x /usr/local/bin/*.sh
  6. run update-initramfs -u -k all to propagate the new link files into the initramfs
  7. Reboot (restarting networking, init 1 and init 3 are not good enough, so reboot), then run the quick checks below
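
After the reboot, a quick sanity check (a sketch; it assumes the modules and link files above took effect):

# confirm the kernel modules loaded
lsmod | grep thunderbolt

# confirm the interfaces were renamed
ip -br link show en05
ip -br link show en06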

Enabling IP Connectivity

proceed to the next gist

Slow Thunderbolt Performance? Too Many Retries? No traffic? Try this!

verify neighbors can see each other (connectivity troubleshooting)

Install LLDP - this is a great way to see which nodes can see which.

  • install lldpd with apt install lldpd on all 3 nodes
  • execute lldpctl and you should see neighbor info (see the sketch below)
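
A minimal check, assuming lldpd is installed on every node and has picked up the thunderbolt interfaces:

apt install lldpd              # on all 3 nodes
systemctl enable --now lldpd   # make sure the daemon is running (it usually starts on install)
lldpctl                        # each node should list its neighbors on en05/en06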

make sure iommu is enabled (speed troubleshooting)

If you are having speed issues, make sure the following is set on the kernel command line in the /etc/default/grub file: intel_iommu=on iommu=pt. Once set, be sure to run update-grub and reboot.

Everyone's grub command line is different; this is mine because I also have i915 virtualization. If you get this wrong you can break your machine. If you are not doing that, you don't need any i915 entries.

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

(Note: if you have more things on your cmd line, DO NOT REMOVE them; just add the two IOMMU options - it doesn't matter where.)
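
The steps to apply the change, as a sketch:

nano /etc/default/grub    # add intel_iommu=on iommu=pt to GRUB_CMDLINE_LINUX_DEFAULT
update-grub               # regenerate the grub config
reboot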

Pinning the Thunderbolt Driver (speed and retries troubleshooting)

Identify your P and E cores by running the following:

cat /sys/devices/cpu_core/cpus && cat /sys/devices/cpu_atom/cpus

You should get two lines on an Intel system with P and E cores. The first line should be your P cores; the second line should be your E cores.

for example on mine:

root@pve1:/etc/pve# cat /sys/devices/cpu_core/cpus && cat /sys/devices/cpu_atom/cpus
0-7
8-15

Create a script to apply affinity settings every time a thunderbolt interface comes up.

  1. make a file at /etc/network/if-up.d/thunderbolt-affinity
  2. add the following to it - make sure to replace echo X-Y with whatever the report told you were your performance cores - e.g. echo 0-7
#!/bin/bash

# Check if the interface is either en05 or en06
if [ "$IFACE" = "en05" ] || [ "$IFACE" = "en06" ]; then
    # Set thunderbolt affinity to P-cores
    grep thunderbolt /proc/interrupts | cut -d ":" -f1 | xargs -I {} sh -c 'echo X-Y | tee "/proc/irq/{}/smp_affinity_list"'
fi
  3. save the file and make it executable with chmod +x /etc/network/if-up.d/thunderbolt-affinity - done; see the verification sketch below
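
To verify it took effect the next time an interface comes up (a sketch - IRQ numbers differ per node; with P-cores 0-7 you should see 0-7):

grep thunderbolt /proc/interrupts | cut -d ":" -f1 | xargs -I {} cat /proc/irq/{}/smp_affinity_list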

Extra Debugging for Thunderbolt

dynamic kernel tracing - adds more info to dmesg, doesn't overwhelm dmesg

I have only tried this on 6.8 kernels, so YMMV. If you want more TB messages in dmesg to see why a connection might be failing, here is how to turn on dynamic tracing.

For boot time you will need to add it to the kernel command line by adding thunderbolt.dyndbg=+p to your /etc/default/grub file, running update-grub and rebooting.

To expand the example above:

`GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt thunderbolt.dyndbg=+p"`  

Don't forget to run update-grub after saving the change to the grub file.

For runtime debug you can run the following command (it will revert on next boot), so this can't be used to capture what happens at boot time.

`echo -n 'module thunderbolt =p' > /sys/kernel/debug/dynamic_debug/control`
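
To watch the extra messages as they arrive (a sketch):

`dmesg -wT | grep -i thunderbolt`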

install tbtools

These tools can be used to inspect your thunderbolt system. Note they rely on Rust being installed; you must use the rustup script below and not install Rust via the package manager at this time (9/15/24).

apt install pkg-config libudev-dev git curl
curl https://sh.rustup.rs -sSf | sh
git clone https://github.com/intel/tbtools
restart your ssh session (so the rust toolchain is on your PATH)
cd tbtools
cargo install --path .
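
Once built, the binaries should land in ~/.cargo/bin (assuming a default rustup install). For example, tblist should enumerate the thunderbolt devices the node can see:

`~/.cargo/bin/tblist`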
@mattyjew

This ensures it gets set every time the en05 or en06 interface goes up or down, including cable connect / disconnect. I prefer this over rc.local. Should the device change IRQ then the rc.local approach will fail. Not sure who suggested this approach, maybe it was @nickglott, but I cannot remember. At least doing it this way is very robust and would be my suggestion.

Thanks, I was also contemplating telling folks to add it to the user crontab using the crontab -e command with @daily, but if this needs to be done each time the driver is loaded that's a bust too. I agree your way looks robust - which I think is key.

It's also wild to me, I just don't get the issue... this is between two of my nodes, I have never set affinity, and I would love to understand why the difference occurs...

Connecting to host fc00::81, port 5201
[  5] local fc00::82 port 38314 connected to fc00::81 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  3.05 GBytes  26.2 Gbits/sec   28   3.06 MBytes       
[  5]   1.00-2.00   sec  3.12 GBytes  26.8 Gbits/sec    3   2.81 MBytes       
[  5]   2.00-3.00   sec  3.09 GBytes  26.6 Gbits/sec   31   3.87 MBytes       
[  5]   3.00-4.00   sec  3.12 GBytes  26.8 Gbits/sec    0   3.87 MBytes       
[  5]   4.00-5.00   sec  3.12 GBytes  26.8 Gbits/sec    8   2.81 MBytes       
[  5]   5.00-6.00   sec  3.10 GBytes  26.7 Gbits/sec    1   3.81 MBytes       
[  5]   6.00-7.00   sec  3.11 GBytes  26.7 Gbits/sec    0   3.81 MBytes       
[  5]   7.00-8.00   sec  3.11 GBytes  26.7 Gbits/sec    0   3.81 MBytes       
[  5]   8.00-9.00   sec  3.09 GBytes  26.6 Gbits/sec    0   3.81 MBytes       
[  5]   9.00-10.00  sec  3.10 GBytes  26.6 Gbits/sec    1   3.81 MBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  31.0 GBytes  26.6 Gbits/sec   72             sender
[  5]   0.00-10.00  sec  31.0 GBytes  26.6 Gbits/sec                  receiver

Out of interest, what is your smp_affinity setting, is it ffff?

root@pve2:~# cat /proc/irq/129/smp_affinity
ffff
root@pve2:~# cat /proc/irq/129/smp_affinity_list
0-15

Hi, I'm getting that cat response but not the stable bitrate and retries. My transfer rate is all over the place, from 8-17 Gbits, with hundreds of retries. I'm running Intel NUC12's in a 3-node mesh. Followed the Gist to the T and everything talks, but it's just not stable. I had it running stable previously but had to do a reinstall (my stuff up). Following it this time was certainly easier but I've come unstuck now.

@mattyjew

A little bit more info and a small win: I get a good stable 26Gbits from node 2 back to node 1, and node 3 back to node 2, but not in the other directions. Interestingly, I get the following on node 1 sometimes but not all the time. It's like a number of directories are missing in that irq set, including 129, compared to the other 2 nodes:

root@Water:~# cat /proc/irq/129/smp_affinity
cat: /proc/irq/129/smp_affinity: No such file or directory

All 3 NUC's are the same make and model, P cores are 0-7.

@mattyjew

Managed to fix the randomly missing affinity file/directory, and running lldpctl on each node now shows all neighbor nodes correctly. I needed to add auto-hotplug en05 and auto-hotplug en06 to my interfaces file on all three nodes. Now all three are consistently coming up; just need to get them all stable at 26G with minimum retries. It looks like it works on some nodes some of the time, but not all 3. I'm running NUC12 Pros (1 Intel, 2 Asus versions).

@mattyjew

And got the P core script to work. Needed to run chmod +x /etc/network/if-up.d/thunderbolt-affinity after creating the affinity script. Once done, when I run cat /proc/irq/129/smp_affinity I get 00ff for cores 0-7 instead of ffff indicating all cores. Thanks Gemma27b local AI!

#noob-to-linux

@michaeleberhardt

Hey Folks,
I followed the guide (thanks a lot!!) completely and my 3-node MS-01 cluster ran fine for weeks.
Today I discovered that thunderbolt networking got incredibly slow - just between 2-10 Mbps, no matter between which nodes.
Affinity etc. is all fine. Did anybody face that problem before? I am on the latest Proxmox kernel.

thanks a lot & best regards,
Michael

@Allistah

@michaeleberhardt - Roll back to kernel 6.8.12-1-pve and see if the issue goes away. I've had problems with later kernels so I've stuck with this one. I just checked and my cluster has been up for 125 days and is still rockin' 26Gb/s to all nodes.

@michaeleberhardt

@Allistah - Thanks, I rolled back to 6.8.12-1-pve, unfortunately no change:

root@node1:~# uname -r
6.8.12-1-pve

root@node1:~# iperf3 -c 172.16.0.2
Connecting to host 172.16.0.2, port 5201
[  5] local 172.16.0.1 port 56376 connected to 172.16.0.2 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   608 KBytes  4.98 Mbits/sec   44   2.83 KBytes       
[  5]   1.00-2.00   sec   956 KBytes  7.83 Mbits/sec   44   2.83 KBytes       
[  5]   2.00-3.00   sec   157 KBytes  1.29 Mbits/sec   24   2.83 KBytes       
[  5]   3.00-4.00   sec   472 KBytes  3.87 Mbits/sec   40   2.83 KBytes       
[  5]   4.00-5.00   sec   160 KBytes  1.31 Mbits/sec   28   2.83 KBytes       
[  5]   5.00-6.00   sec   481 KBytes  3.94 Mbits/sec   28   2.83 KBytes       
[  5]   6.00-7.00   sec   478 KBytes  3.92 Mbits/sec   34   2.83 KBytes       
[  5]   7.00-8.00   sec   479 KBytes  3.93 Mbits/sec   34   2.83 KBytes       
[  5]   8.00-9.00   sec  1.32 MBytes  11.1 Mbits/sec   39   7.07 KBytes       
[  5]   9.00-10.00  sec   474 KBytes  3.88 Mbits/sec   38   2.83 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  5.49 MBytes  4.60 Mbits/sec  353             sender
[  5]   0.00-10.00  sec  5.37 MBytes  4.50 Mbits/sec                  receiver

iperf Done.
root@node1:~# 

Any help is very appreciated :-)
Best regards!
Michael

@michaeleberhardt

Okay, I found a solution.
Don't ask me why, but from the start it worked without setting an MTU explicitly.
Now I set the MTU to 65520 and it works at about 24-26Gbps.
So if anybody faces a similar issue, check MTU.
btw: it works on Kernel 6.8.12-12-pve.

Best regards!
Michael
