@scyto — Last active September 24, 2025
Thunderbolt Networking Setup

Thunderbolt Networking

This gist is part of this series.

You will need Proxmox kernel 6.2.16-14-pve or higher.

Load Kernel Modules

  • add the thunderbolt and thunderbolt-net kernel modules (this must be done on all nodes - yes, I know it can sometimes work without them, but thunderbolt-net has interesting behaviour, so do as I say and add both ;-)
    1. nano /etc/modules and add the modules at the bottom of the file, one on each line (see the example below)
    2. save using Ctrl+X, then Y, then Enter
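
After the edit, /etc/modules should end with something like this (a minimal sketch; keep any entries already in the file above these two lines):

# ... any existing entries stay above ...
thunderbolt
thunderbolt-net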

Prepare /etc/network/interfaces

Doing this means we don't have to give each thunderbolt interface a manual IPv6 address and that these addresses stay constant no matter what. Add the following to each node using nano /etc/network/interfaces.

If you see any sections called thunderbolt0 or thunderbolt1, delete them at this point.

Create entries to prepopulate the GUI with a reminder

Doing this means we don't have to give each thunderbolt interface a manual IPv6 or IPv4 address and that these addresses stay constant no matter what.

Add the following to each node using nano /etc/network/interfaces; this reminds you not to edit en05 and en06 in the GUI.

This fragment should go between the existing auto lo section and the adapter sections.

iface en05 inet manual
#do not edit in GUI

iface en06 inet manual
#do not edit in GUI

If you see any thunderbolt sections, delete them from the file before you save it.

DO NOT DELETE the source /etc/network/interfaces.d/* line - it will always exist on the latest versions and should be the last or next-to-last line in the /etc/network/interfaces file.
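
Putting it together, the top of /etc/network/interfaces would then look roughly like this (a sketch only - your adapter and bridge stanzas will differ):

auto lo
iface lo inet loopback

iface en05 inet manual
#do not edit in GUI

iface en06 inet manual
#do not edit in GUI

# ... your existing adapter and bridge sections ...

source /etc/network/interfaces.d/*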

Rename Thunderbolt Connections

This is needed as Proxmox doesn't recognize the thunderbolt interface names. There are various methods to do this. This method was selected after trial and error because:

  • the thunderboltX naming is not fixed to a port (it seems to be based on the sequence in which you plug the cables in)
  • the MAC address of the interfaces changes with most cable insertion and removal events
  1. use the udevadm monitor command to find your device IDs when you insert and remove each TB4 cable. Yes, you can use other ways to do this; I recommend this one as it is a great way to understand what udev does - the command proved more useful to me than syslog or lspci for troubleshooting thunderbolt issues and behaviours. In my case my two PCI paths are 0000:00:0d.2 and 0000:00:0d.3; if you bought the same hardware this will be the same on all 3 units. Don't assume your PCI device paths will be the same as mine.

  2. create a link file using nano /etc/systemd/network/00-thunderbolt0.link and enter the following content:

[Match]
Path=pci-0000:00:0d.2
Driver=thunderbolt-net
[Link]
MACAddressPolicy=none
Name=en05
  3. create a second link file using nano /etc/systemd/network/00-thunderbolt1.link and enter the following content:
[Match]
Path=pci-0000:00:0d.3
Driver=thunderbolt-net
[Link]
MACAddressPolicy=none
Name=en06

Set Interfaces to UP on reboots and cable insertions

This section ensures that the interfaces will be brought up at boot or cable insertion with whatever settings are in /etc/network/interfaces - this shouldn't need to be done; it seems like a bug in the way thunderbolt networking is handled (I assume this is Debian-wide but haven't checked).

Huge thanks to @corvy for figuring out a script that should make this much, much more reliable for most people.

  1. create a udev rule to detect for cable insertion using nano /etc/udev/rules.d/10-tb-en.rules with the following content:
ACTION=="move", SUBSYSTEM=="net", KERNEL=="en05", RUN+="/usr/local/bin/pve-en05.sh"
ACTION=="move", SUBSYSTEM=="net", KERNEL=="en06", RUN+="/usr/local/bin/pve-en06.sh"
  2. save the file

  3. create the first script referenced above using nano /usr/local/bin/pve-en05.sh with the following content:

#!/bin/bash

LOGFILE="/tmp/udev-debug.log"
VERBOSE="" # Set this to "-v" for verbose logging
IF="en05"

echo "$(date): pve-$IF.sh triggered by udev" >> "$LOGFILE"

# If multiple interfaces go up at the same time, 
# retry 10 times and break the retry when successful
for i in {1..10}; do
    echo "$(date): Attempt $i to bring up $IF" >> "$LOGFILE"
    /usr/sbin/ifup $VERBOSE $IF >> "$LOGFILE" 2>&1 && {
        echo "$(date): Successfully brought up $IF on attempt $i" >> "$LOGFILE"
        break
    }
  
    echo "$(date): Attempt $i failed, retrying in 3 seconds..." >> "$LOGFILE"
    sleep 3
done

save the file, and then

  4. create the second script referenced above using nano /usr/local/bin/pve-en06.sh with the following content:
#!/bin/bash

LOGFILE="/tmp/udev-debug.log"
VERBOSE="" # Set this to "-v" for verbose logging
IF="en06"

echo "$(date): pve-$IF.sh triggered by udev" >> "$LOGFILE"

# If multiple interfaces go up at the same time, 
# retry 10 times and break the retry when successful
for i in {1..10}; do
    echo "$(date): Attempt $i to bring up $IF" >> "$LOGFILE"
    /usr/sbin/ifup $VERBOSE $IF >> "$LOGFILE" 2>&1 && {
        echo "$(date): Successfully brought up $IF on attempt $i" >> "$LOGFILE"
        break
    }
  
    echo "$(date): Attempt $i failed, retrying in 3 seconds..." >> "$LOGFILE"
    sleep 3
done

and save the file

  5. make both scripts executable with chmod +x /usr/local/bin/*.sh
  6. run update-initramfs -u -k all to propagate the new link files into initramfs
  7. Reboot (restarting networking, init 1 and init 3 are not good enough, so reboot)
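
Once the node is back up, a quick sanity check (a sketch; it assumes the renames and scripts above are in place):

ip -br link | grep -E 'en05|en06'    # both renamed interfaces should be listed, UP once cables are connected
tail /tmp/udev-debug.log             # the pve-en0x.sh scripts log their ifup attempts here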

Enabling IP Connectivity

proceed to the next gist

Slow Thunderbolt Performance? Too Many Retries? No traffic? Try this!

verify neighbors can see each other (connectivity troubleshooting)

Install LLDP - this is great to see which nodes can see each other.

  • install lldpctl with apt install lldpd on all 3 nodes
  • execute lldpctl and you should see neighbor info for each thunderbolt interface (see the example below)
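
A minimal sketch of the check (exact output formatting depends on your lldpd version):

apt install lldpd     # on all 3 nodes; the daemon starts advertising automatically
lldpctl               # lists each local interface and the neighbor system seen on it - en05/en06 should show the other nodes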

make sure iommu is enabled (speed troubleshooting)

If you are having speed issues, make sure the following is set on the kernel command line in the /etc/default/grub file: intel_iommu=on iommu=pt. Once set, be sure to run update-grub and reboot.

Everyone's grub command line is different; this one is mine because I also have i915 virtualization. If you get this wrong you can break your machine, and if you are not doing i915 virtualization you don't need any i915 entries.

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt" (note: if you have more things in your cmd line DO NOT REMOVE them, just add the two iommu options; it doesn't matter where they go.)
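
After the reboot, a quick way to confirm the options actually made it onto the kernel command line:

cat /proc/cmdline     # should now include intel_iommu=on iommu=pt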

Pinning the Thunderbolt Driver (speed and retries troubleshooting)

identify your P and E cores by running the following

cat /sys/devices/cpu_core/cpus && cat /sys/devices/cpu_atom/cpus

You should get two lines on an Intel system with P and E cores: the first line should be your P cores, the second line your E cores.

for example on mine:

root@pve1:/etc/pve# cat /sys/devices/cpu_core/cpus && cat /sys/devices/cpu_atom/cpus
0-7
8-15

Create a script to apply affinity settings every time a thunderbolt interface comes up.

  1. make a file at /etc/network/if-up.d/thunderbolt-affinity
  2. add the following to it - make sure to replace echo X-Y with whatever the report told you were your performance cores - e.g. echo 0-7
#!/bin/bash

# Check if the interface is either en05 or en06
if [ "$IFACE" = "en05" ] || [ "$IFACE" = "en06" ]; then
    # Set Thunderbolt IRQ affinity to P cores
    grep thunderbolt /proc/interrupts | cut -d ":" -f1 | xargs -I {} sh -c 'echo X-Y | tee "/proc/irq/{}/smp_affinity_list"'
fi
  3. save the file - done (a quick check is sketched below)
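
Since a missed chmod comes up more than once in the comments below, double-check the script is executable, and then confirm the affinity actually applied (IRQ numbers will differ per system; a sketch only):

chmod +x /etc/network/if-up.d/thunderbolt-affinity
# after en05/en06 next come up, each thunderbolt IRQ should list your P cores:
grep thunderbolt /proc/interrupts | cut -d ":" -f1 | xargs -I {} cat /proc/irq/{}/smp_affinity_list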

Extra Debugging for Thunderbolt

dynamic kernel tracing - adds more info to dmesg, doesn't overwhelm dmesg

I have only tried this on 6.8 kernels, so YMMV. If you want more TB messages in dmesg to see why a connection might be failing, here is how to turn on dynamic tracing.

For boot time you will need to add it to the kernel command line by adding thunderbolt.dyndbg=+p to your /etc/default/grub file, running update-grub and rebooting.

To expand the example above:

`GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt thunderbolt.dyndbg=+p"`  

Don't forget to run update-grub after saving the change to the grub file.

For runtime debug you can run the following command (it will revert on the next boot, so this can't be used to capture what happens at boot time).

`echo -n 'module thunderbolt =p' > /sys/kernel/debug/dynamic_debug/control`
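
Either way, the extra messages land in the normal kernel log; following them while plugging and unplugging cables is usually enough (a sketch, not an exhaustive filter):

dmesg -w | grep -i thunderbolt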

install tbtools

These tools can be used to inspect your thunderbolt system. Note they rely on Rust being installed; you must use the rustup script below and not install Rust via the package manager at this time (9/15/24).

apt install pkg-config libudev-dev git curl
curl https://sh.rustup.rs -sSf | sh
git clone https://github.com/intel/tbtools
restart your ssh session
cd tbtools
cargo install --path .
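
The binaries built by cargo install land in ~/.cargo/bin, which the rustup installer adds to your shell profile - that is why the ssh session restart above is needed. If the tools aren't found afterwards, check there first:

ls ~/.cargo/bin
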
@scyto (Author) commented Aug 18, 2025

I wonder if this is something that we should do (disable CPU C-States) on the NUC 13 Pros as well?

I would only say do this if you are not getting the 26Gbps in iperf3 tests.

@scyto (Author) commented Aug 18, 2025

Can more people please share their experiences with the upgrade from 8 to 9? What steps to take, things to watch out for, and how to fix/remediate?

follow the instructions carefully
watch out for apt sources issues, the instructions will leave you with some bookworm entries and likely some duplicates - just remove them
and disable any frr.service restart commands you have tied to interfaces coming up - which most people here have.....

@ssavkar commented Aug 18, 2025

Can more people please share their experiences with the upgrade from 8 to 9? What steps to take, things to watch out for, and how to fix/remediate?

follow the instructions carefully, watch out for apt sources issues (the instructions will leave you with some bookworm entries and likely some duplicates - just remove them), and disable any frr.service restart commands you have tied to interfaces coming up - which most people here have.....

I am sort of curious, rather than restart commands, I made some fixes on my MS-01 based on something nimro27 commented on back in 11/24, and had the following frr.service.d/dependencies.conf file added. Starting to wonder if this will also create a hang:

[Unit]
Wants=sys-subsystem-net-devices-en05.device sys-subsystem-net-devices-en06.device
After=sys-subsystem-net-devices-en05.device sys-subsystem-net-devices-en06.device

@Randymartin1991

…disable any frr.service restart commands…

Is this no longer needed in Proxmox 9? When disabling this under 8.4.11, Ceph does not come up automatically after a reboot. So disable the restart scripts, and then update?

@DomMintago

I'm getting crazy high retries on iperf3 regardless of what I tried:

  • Set smp_affinity
  • cpupower to performance
  • Disabled c-states and ASPM
  • Tried 3 different thunderbolt cables (OWC, CableMatters, Club3D)

I'm using 3x MS-01, any idea what else to try?

@Randymartin1991

I'm getting crazy high retries on iperf3 regardless of what I tried:

* Set smp_affinity

* cpupower to performance

* Disabled c-states and ASPM

* Tried 3 different thunderbolt cables (OWC, CableMatters, Club3D)

I'm using 3x MS-01, any idea what else to try?

Did you also force the pci speed to gen4?

@DomMintago

I'm getting crazy high retries on iperf3 regardless of what I tried:

* Set smp_affinity

* cpupower to performance

* Disabled c-states and ASPM

* Tried 3 different thunderbolt cables (OWC, CableMatters, Club3D)

I'm using 3x MS-01, any idea what else to try?

Did you also force the pci speed to gen4?

Yep, no difference

@ssavkar commented Aug 20, 2025

I'm getting crazy high retries on iperf3 regardless of what I tried:

* Set smp_affinity

* cpupower to performance

* Disabled c-states and ASPM

* Tried 3 different thunderbolt cables (OWC, CableMatters, Club3D)

I'm using 3x MS-01, any idea what else to try?

Did you also force the pci speed to gen4?

Yep, no difference

Did you make sure for the affinity script it is executable? You may see earlier that was my issue for one of my ms-01 meshes.

@Randymartin1991

These are my retries - also a bit high, but the speed is just fine.
The first test runs against 10.10.10.1, the node's own local interface, which is why that number is so high; but should it drop, I know the speed is off again. That has not happened anymore since I tweaked the BIOS settings.

iperf3 Network Speed Report

Test Timestamp: Wed Aug 20 12:17:33 PM CEST 2025

Running iperf3 test against 10.10.10.1...

Connecting to host 10.10.10.1, port 5201
[ 5] local 10.10.10.1 port 53986 connected to 10.10.10.1 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 9.02 GBytes 77.5 Gbits/sec 0 1.69 MBytes
[ 5] 1.00-2.00 sec 9.14 GBytes 78.5 Gbits/sec 0 1.87 MBytes
[ 5] 2.00-3.00 sec 9.27 GBytes 79.6 Gbits/sec 0 2.00 MBytes
[ 5] 3.00-4.00 sec 9.21 GBytes 79.1 Gbits/sec 0 2.31 MBytes
[ 5] 4.00-5.00 sec 8.92 GBytes 76.6 Gbits/sec 0 2.62 MBytes


[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-5.00 sec 45.6 GBytes 78.3 Gbits/sec 0 sender
[ 5] 0.00-5.00 sec 45.6 GBytes 78.3 Gbits/sec receiver

iperf Done.

Host: 10.10.10.1
Speed: 78.3 Gbits/sec
Status: ✅ PASS

Running iperf3 test against 10.10.10.2...

Connecting to host 10.10.10.2, port 5201
[ 5] local 10.10.10.1 port 41306 connected to 10.10.10.2 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 2.16 GBytes 18.5 Gbits/sec 141 3.37 MBytes
[ 5] 1.00-2.00 sec 2.50 GBytes 21.5 Gbits/sec 247 3.56 MBytes
[ 5] 2.00-3.00 sec 2.10 GBytes 18.0 Gbits/sec 116 3.31 MBytes
[ 5] 3.00-4.00 sec 2.64 GBytes 22.6 Gbits/sec 273 2.37 MBytes
[ 5] 4.00-5.00 sec 2.34 GBytes 20.1 Gbits/sec 151 3.62 MBytes


[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-5.00 sec 11.7 GBytes 20.2 Gbits/sec 928 sender
[ 5] 0.00-5.00 sec 11.7 GBytes 20.1 Gbits/sec receiver

iperf Done.

Host: 10.10.10.2
Speed: 20.2 Gbits/sec
Status: ✅ PASS

Running iperf3 test against 10.10.10.3...

Connecting to host 10.10.10.3, port 5201
[ 5] local 10.10.10.1 port 53048 connected to 10.10.10.3 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 2.95 GBytes 25.3 Gbits/sec 82 3.62 MBytes
[ 5] 1.00-2.00 sec 2.93 GBytes 25.2 Gbits/sec 35 4.37 MBytes
[ 5] 2.00-3.00 sec 2.99 GBytes 25.6 Gbits/sec 96 2.31 MBytes
[ 5] 3.00-4.00 sec 2.92 GBytes 25.0 Gbits/sec 181 1.56 MBytes
[ 5] 4.00-5.00 sec 2.24 GBytes 19.2 Gbits/sec 203 3.31 MBytes


[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-5.00 sec 14.0 GBytes 24.1 Gbits/sec 597 sender
[ 5] 0.00-5.00 sec 14.0 GBytes 24.1 Gbits/sec receiver

iperf Done.

Host: 10.10.10.3
Speed: 24.1 Gbits/sec
Status: ✅ PASS

@DamianRyse

What I noticed in regard to transfer speed and retries is that Turbo Mode must be turned on in order to get decent results, although Turbo Mode only affects the receiving device.

iperf3 test WITHOUT turbo mode

[  5] local 10.0.0.1 port 40056 connected to 10.0.0.2 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   682 MBytes  5.71 Gbits/sec  244   1.31 MBytes
[  5]   1.00-2.00   sec  1.09 GBytes  9.39 Gbits/sec  408   1.31 MBytes
[  5]   2.00-3.00   sec  1.85 GBytes  15.9 Gbits/sec  577   2.19 MBytes
[  5]   3.00-4.00   sec  1.17 GBytes  10.0 Gbits/sec  446   1.25 MBytes
[  5]   4.00-5.00   sec   985 MBytes  8.26 Gbits/sec  379   1.31 MBytes
[  5]   5.00-6.00   sec  1.03 GBytes  8.80 Gbits/sec  323   1.31 MBytes
[  5]   6.00-7.00   sec  1.08 GBytes  9.29 Gbits/sec  402   1.44 MBytes
[  5]   7.00-8.00   sec  1.23 GBytes  10.6 Gbits/sec  444   1.12 MBytes
[  5]   8.00-9.00   sec  1.65 GBytes  14.2 Gbits/sec  550   1.06 MBytes
[  5]   9.00-10.00  sec  1.46 GBytes  12.5 Gbits/sec  457    959 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  12.2 GBytes  10.5 Gbits/sec  4230            sender
[  5]   0.00-10.00  sec  12.2 GBytes  10.5 Gbits/sec                  receiver

iperf3 test WITH turbo mode enabled

[  5] local 10.0.0.1 port 39930 connected to 10.0.0.2 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  2.96 GBytes  25.4 Gbits/sec   78   3.31 MBytes
[  5]   1.00-2.00   sec  3.06 GBytes  26.3 Gbits/sec   15   3.31 MBytes
[  5]   2.00-3.00   sec  3.06 GBytes  26.3 Gbits/sec   16   3.31 MBytes
[  5]   3.00-4.00   sec  3.06 GBytes  26.3 Gbits/sec   14   3.31 MBytes
[  5]   4.00-5.00   sec  3.03 GBytes  26.1 Gbits/sec   20   3.31 MBytes
[  5]   5.00-6.00   sec  3.05 GBytes  26.2 Gbits/sec   13   3.31 MBytes
[  5]   6.00-7.00   sec  3.05 GBytes  26.2 Gbits/sec   19   3.81 MBytes
[  5]   7.00-8.00   sec  2.38 GBytes  20.4 Gbits/sec   20   3.50 MBytes
[  5]   8.00-9.00   sec  3.06 GBytes  26.3 Gbits/sec   28   3.50 MBytes
[  5]   9.00-10.00  sec  3.07 GBytes  26.3 Gbits/sec   14   3.50 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  29.8 GBytes  25.6 Gbits/sec  237            sender
[  5]   0.00-10.00  sec  29.8 GBytes  25.6 Gbits/sec                  receiver

The reason for this (as far as I figured out) is that the CPU cannot process the incoming data fast enough without Turbo Mode, and the ksoftirqd interrupts keep climbing. In a process monitor like top we can see that the ksoftirqd process takes up to 99% CPU, which then results in massive packet drops/retries.

Unfortunately, in my setup the power consumption increases by up to 100W (worst case scenario) when Turbo Mode is enabled on both of my MS-01.
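
If you want to confirm you are hitting the same softirq bottleneck, watching the receiving node during an iperf3 run is enough (a rough sketch; CPU numbers are system-specific):

top -H                                               # look for ksoftirqd/<n> threads pinned near 100%
watch -n1 "grep -E 'NET_RX|NET_TX' /proc/softirqs"   # the per-CPU counters climbing fastest show the overloaded core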

@DomMintago

I'm getting crazy high retries on iperf3 regardless of what I tried:

* Set smp_affinity

* cpupower to performance

* Disabled c-states and ASPM

* Tried 3 different thunderbolt cables (OWC, CableMatters, Club3D)

I'm using 3x MS-01, any idea what else to try?

Did you also force the pci speed to gen4?

Yep, no difference

Did you make sure for the affinity script it is executable? You may see earlier that was my issue for one of my ms-01 meshes.

Yep I did. smp_affinity_list look all correct too.

@DamianRyse is that just with turbo boost on in bios?

@ssavkar commented Aug 20, 2025

…disable any frr.service restart commands…

Is this no longer needed in proxmox 9, when disabling this under 8.4.11 ceph does not come up automatically after a reboot. So disable the restart scripts, and update?

At least for me commenting it out seems fine. I will see how things go but just upgraded all nodes to ProxMox 9 and actually went without a hitch. No issues (as far as I can tell) whatsoever. Did move all my VMs to continue running on a separate test machine I had and have yet to move them back, but hm. Shockingly smooth.

@DamianRyse

I'm getting crazy high retries on iperf3 regardless of what I tried:

* Set smp_affinity

* cpupower to performance

* Disabled c-states and ASPM

* Tried 3 different thunderbolt cables (OWC, CableMatters, Club3D)

I'm using 3x MS-01, any idea what else to try?

Did you also force the pci speed to gen4?

Yep, no difference

Did you make sure for the affinity script it is executable? You may see earlier that was my issue for one of my ms-01 meshes.

Yep I did. smp_affinity_list look all correct too.

@DamianRyse is that just with turbo boost on in bios?

Hi @DomMintago
Yes, it must be enabled in the UEFI. It can also be checked (and toggled) on Linux itself via the value of /sys/devices/system/cpu/intel_pstate/no_turbo:
0 -> Turbo Mode is enabled
1 -> Turbo Mode is disabled

When Turbo Mode is disabled, it absolutely helps to limit the max throughput of your Thunderbolt interfaces to get a stable data transfer rate: (replace "en06" with your interface name)

tc qdisc add dev en06 root tbf rate 10gbit burst 32m latency 400ms

For me, 10gbit was the sweet spot for few to no retries and an almost constant transfer rate of 10 Gbps.

To delete the limit:

tc qdisc del dev en06 root

iperf3 test with Turbo Mode disabled and throughput limited to 10gbit

Connecting to host 10.0.0.2, port 5201
[  5] local 10.0.0.1 port 40840 connected to 10.0.0.2 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  1.19 GBytes  10.3 Gbits/sec   35   2.26 MBytes
[  5]   1.00-2.00   sec  1.16 GBytes  9.98 Gbits/sec    3   2.26 MBytes
[  5]   2.00-3.00   sec  1.16 GBytes  10.0 Gbits/sec    4   2.26 MBytes
[  5]   3.00-4.00   sec  1.16 GBytes  9.98 Gbits/sec    0   2.26 MBytes
[  5]   4.00-5.00   sec  1.16 GBytes  9.99 Gbits/sec    0   2.26 MBytes
[  5]   5.00-6.00   sec  1.16 GBytes  9.99 Gbits/sec    0   2.26 MBytes
[  5]   6.00-7.00   sec  1.16 GBytes  9.99 Gbits/sec    0   2.26 MBytes
[  5]   7.00-8.00   sec  1.16 GBytes  10.0 Gbits/sec    0   2.26 MBytes
[  5]   8.00-9.00   sec  1.16 GBytes  9.99 Gbits/sec    0   2.26 MBytes
[  5]   9.00-10.00  sec  1.16 GBytes  9.98 Gbits/sec    0   2.26 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  11.7 GBytes  10.0 Gbits/sec   42            sender
[  5]   0.00-10.00  sec  11.7 GBytes  10.0 Gbits/sec                  receiver

@DomMintago

I'm getting crazy high retries on iperf3 regardless of what I tried:

* Set smp_affinity

* cpupower to performance

* Disabled c-states and ASPM

* Tried 3 different thunderbolt cables (OWC, CableMatters, Club3D)

I'm using 3x MS-01, any idea what else to try?

Did you also force the pci speed to gen4?

Yep, no difference

Did you make sure for the affinity script it is executable? You may see earlier that was my issue for one of my ms-01 meshes.

Yep I did. smp_affinity_list look all correct too.
@DamianRyse is that just with turbo boost on in bios?

Hi @DomMintago Yes, it must be enabled in the UEFI. It can also be checked (and toggled) on Linux itself via the value of /sys/devices/system/cpu/intel_pstate/no_turbo: 0 -> Turbo Mode is enabled, 1 -> Turbo Mode is disabled

When Turbo Mode is disabled, it absolutely helps to limit the max throughput of your Thunderbolt interfaces to get a stable data transfer rate: (replace "en06" with your interface name)

tc qdisc add dev en06 root tbf rate 10gbit burst 32m latency 400ms

For me, 10gbit was the sweet spot for few to no retries and an almost constant transfer rate of 10 Gbps.

To delete the limit:

tc qdisc del dev en06 root

iperf3 test with Turbo Mode disabled and throughput limited to 10gbit

Connecting to host 10.0.0.2, port 5201
[  5] local 10.0.0.1 port 40840 connected to 10.0.0.2 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  1.19 GBytes  10.3 Gbits/sec   35   2.26 MBytes
[  5]   1.00-2.00   sec  1.16 GBytes  9.98 Gbits/sec    3   2.26 MBytes
[  5]   2.00-3.00   sec  1.16 GBytes  10.0 Gbits/sec    4   2.26 MBytes
[  5]   3.00-4.00   sec  1.16 GBytes  9.98 Gbits/sec    0   2.26 MBytes
[  5]   4.00-5.00   sec  1.16 GBytes  9.99 Gbits/sec    0   2.26 MBytes
[  5]   5.00-6.00   sec  1.16 GBytes  9.99 Gbits/sec    0   2.26 MBytes
[  5]   6.00-7.00   sec  1.16 GBytes  9.99 Gbits/sec    0   2.26 MBytes
[  5]   7.00-8.00   sec  1.16 GBytes  10.0 Gbits/sec    0   2.26 MBytes
[  5]   8.00-9.00   sec  1.16 GBytes  9.99 Gbits/sec    0   2.26 MBytes
[  5]   9.00-10.00  sec  1.16 GBytes  9.98 Gbits/sec    0   2.26 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  11.7 GBytes  10.0 Gbits/sec   42            sender
[  5]   0.00-10.00  sec  11.7 GBytes  10.0 Gbits/sec                  receiver

Yeah my turbo is on. Limiting throughput does help, I'm just wondering why my retries are so high with the same setup as others.

@DamianRyse

@DomMintago have you tried another cable? I got one that is "Intel Thunderbolt certified" or something like that.

@Wyox commented Aug 21, 2025

I've had similar problems with the Retr and I assume this is due to the fact I had the scaling governor set to powersave. Performance was fine except for the high Retr, so as suggested above I've also tried

# Ensure turbo is on
echo "0" > /sys/devices/system/cpu/intel_pstate/no_turbo
# Ensure that Turbo speed goes to the max
echo 100 > /sys/devices/system/cpu/intel_pstate/max_perf_pct

# Get you performance cores for the command below
cat /sys/devices/cpu_core/cpus && cat /sys/devices/cpu_atom/cpus

# Apply IRQ affinity to performance cores
grep thunderbolt /proc/interrupts | cut -d ":" -f1 | xargs -I {} sh -c 'echo 0-7 | tee "/proc/irq/{}/smp_affinity_list"'

I went down multiple routes afterwards to try and figure out a way to solve the issue without switching the CPU governor, and the script below seems to do the trick for me.

It puts the CPU cores that are used for the Thunderbolt IRQs into the highest "performance" preference while keeping the others at "balance_power".

# Powersaving measures
echo "powersave" | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
echo "balance_power" | tee /sys/devices/system/cpu/cpu*/cpufreq/energy_performance_preference
echo 10 | tee /sys/devices/system/cpu/cpu*/power/energy_perf_bias


# Determine the cores that are used for Thunderbolt IRQ. Afaik each port if used to link nodes uses 2 cores for IRQs, 1 RX and 1 TX.
# So if 2 nodes are connected, 4 P cores should be set to performance

THUNDERBOLT_IRQS=$(grep thunderbolt /proc/interrupts | cut -d ":" -f1)
PCORES=$(cat /sys/devices/cpu_core/cpus)
for irq in $THUNDERBOLT_IRQS; do

    IRQ_ON_CPU=$(cat /proc/irq/$irq/effective_affinity_list)

    # Only apply to the cores that need it to save more power
    IRQ_INTERUPTS_ON_CORE=$(awk -v irq="$irq:" -v cpu="$IRQ_ON_CPU" '$1 == irq {print $(cpu + 2)}' /proc/interrupts)
    # Attempt to filter the right cores after a reboot and iperf to each node
    if [ "$IRQ_INTERUPTS_ON_CORE" -gt 10 ]; then
      echo "Applying performance mode que to fact that there were interrupts on this core $IRQ_INTERUPTS_ON_CORE"
      echo "IRQ: $irq on core $IRQ_ON_CPU"
      cat /proc/irq/$irq/smp_affinity_list

      echo "performance" | tee /sys/devices/system/cpu/cpu$IRQ_ON_CPU/cpufreq/energy_performance_preference
      echo "performance applied to $IRQ_ON_CPU"        
    fi
done

There are still improvements to be made to the script: if the kernel decides to switch the cores used for the IRQs due to the affinity change, the performance preference remains applied to the previously used cores as well. But for me this reduced the Retr by a good amount.

Connecting to host 10.0.0.82, port 5201
[  5] local 10.0.0.83 port 55778 connected to 10.0.0.82 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  3.05 GBytes  26.2 Gbits/sec   33   3.25 MBytes
[  5]   1.00-2.00   sec  3.05 GBytes  26.2 Gbits/sec   18   3.25 MBytes
[  5]   2.00-3.00   sec  2.95 GBytes  25.3 Gbits/sec   22   3.25 MBytes
[  5]   3.00-4.00   sec  3.07 GBytes  26.4 Gbits/sec    4   3.25 MBytes
[  5]   4.00-5.00   sec  3.08 GBytes  26.5 Gbits/sec    3   3.25 MBytes
[  5]   5.00-6.00   sec  3.06 GBytes  26.3 Gbits/sec    5   3.25 MBytes
[  5]   6.00-7.00   sec  3.06 GBytes  26.3 Gbits/sec    7   3.25 MBytes
[  5]   7.00-8.00   sec  3.04 GBytes  26.1 Gbits/sec    8   3.25 MBytes
[  5]   8.00-9.00   sec  3.07 GBytes  26.3 Gbits/sec   27   3.18 MBytes
[  5]   9.00-10.00  sec  3.06 GBytes  26.3 Gbits/sec    8   3.18 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  30.5 GBytes  26.2 Gbits/sec  135             sender
[  5]   0.00-10.00  sec  30.5 GBytes  26.2 Gbits/sec                  receiver

@archiebug

I managed to upgrade Ceph to Squid using the Proxmox guides. I still get the 26 Gbit/s data rate as well. Is there a quick list of next steps to do to get to 9.0?

  1. run the upgrade checker script and make sure there are no errors
  2. remedy the frr.service issues (comment out frr.service restart commands in scripts). From the initial setup guide, it appears those are the lo and en0x scripts under the if-up.d folder. Is there an example of what those should look like now?
  3. follow the Proxmox guide to upgrade to 9.0

@ssavkar commented Aug 23, 2025

I managed to upgrade Ceph to Squid using the Proxmox guides. I still get the 26 Gbit/s data rate as well. Is there a quick list of next steps to do to get to 9.0?

  1. run the upgrade checker script and make sure there are no errors
  2. remedy the frr.service issues (comment out frr.service restart commands in scripts). From the initial setup guide, it appears those are the lo and en0x scripts under the if-up.d folder. Is there an example of what those should look like now?
  3. follow the Proxmox guide to upgrade to 9.0

So everyone is a little different; for instance, I did not make the changes in the if-up.d folder but instead had a dependencies.conf file in my systemd service configuration which essentially had the same functionality, and I fully commented that out. After that, with respect to each node, I very carefully followed the directions that Proxmox has for the 8 to 9 update and kept double-checking pve8to9 to see what warnings or failures arose, to keep things all clean.

One thing I did that some have not: besides commenting out the frr files I had used to ensure they came up at the right time, I also (as was the case when you upgraded to Squid) set the noout flag on each node to ensure no changes to the node in the midst of the update.

So “ceph osd set noout”. I then after reboot unset the noout flag.

Only other thing I’d note is that at the end of the process on each node besides cleaning up apt/sources, I also had to remove some old system files and install a new one as suggested by the pve8to9 script which again noted some stale files that needed to still be taken care of.

I didn’t find the need to do much else so long as you really do follow the online instructions ProxMox has put together for the 8 to 9 upgrade.

I still have one node that isn't part of a cluster which runs my OPNsense instance, and I haven't yet figured out how to deal with that one, since I can't really move OPNsense off of it, at least not at the moment. And of course during the upgrade I'll lose access to the internet if I try to leave it as is. But I may just deal with that at a later date; it's not a real issue sticking with 8 for that one node.

*** oh and to your question as to what they should look like post install, I don’t think you have the same issues we all had with 8 so I have not had to modify anything further for frr to come up ok now.

@Hindin81 commented Sep 5, 2025

I have ordered 3 x MS-01. Are there any recommendations on how to proceed in this case? Install Proxmox 9 right away and then try to get the Thunderbolt network up and running using the instructions here? I also saw that the latest BIOS version for the MS-01 is 1.27. Or would it be better to install Proxmox 8 and then migrate later?

@Rgamer84 commented Sep 5, 2025

If this is your first go at this, and you are okay with not using IPv6, I'd recommend getting the BIOS updated first, installing Proxmox 9 and implementing the taslabs-net gist (https://gist.github.com/taslabs-net/9da77d302adb9fc3f10942d81f700a05). It works out of the box and hasn't created any headaches that I've seen so far. If you apply this one, you will have to modify things listed above to make it work properly, and if followed to the T, you might even break the OS from booting. I don't want to steer anyone away from this gist, but for Proxmox 9 at this time the other gist will be more straightforward. If you do want to go with scyto's documentation for TB4 - who, by the way, has done an amazing job - I'd recommend reading the comments above and sorting out how to modify it so it will work properly in your environment.

@ssavkar commented Sep 15, 2025

So a really interesting issue that arose for me on my updated proxmox 9 mesh system (three MS-01s) is that all of a sudden I started having issues on reboot of nodes randomly needing me to unplug/replug in thunderbolt cables to get things up and running. Drove me nuts.

I realized my two /usr/local/bin/pve0*.sh scripts were basic ifup scripts that had worked without issues with proxmox 8. But now with the updates, all of a sudden I really had to update them to the script @corvy came up with and that is in this gist now. I can't recall when I set things up but now that I have updated to these new .sh files, I am up and running again fine on reboots.

So something to definitely be careful about going forward!

@contributorr

So a really interesting issue that arose for me on my updated proxmox 9 mesh system (three MS-01s) is that all of a sudden I started having issues on reboot of nodes randomly needing me to unplug/replug in thunderbolt cables to get things up and running. Drove me nuts.

I realized my two /usr/local/bin/pve0*.sh scripts were basic ifup scripts that had worked without issues with proxmox 8. But now with the updates, all of a sudden I really had to update them to the script @corvy came up with and that is in this gist now. I can't recall when I set things up but now that I have updated to these new .sh files, I am up and running again fine on reboots.

So something to definitely be careful about going forward!

This happened to me while patching proxmox nodes a couple of days ago - I needed to replug all TB cables to get all connections UP. BUT, when I was just doing a reboot after another small patching later (kernel too), I didn't need to replug the TB cables, so not sure what's wrong. Any idea how you fixed that?

Thanks.

@ssavkar commented Sep 16, 2025

So a really interesting issue that arose for me on my updated proxmox 9 mesh system (three MS-01s) is that all of a sudden I started having issues on reboot of nodes randomly needing me to unplug/replug in thunderbolt cables to get things up and running. Drove me nuts.
I realized my two /usr/local/bin/pve0*.sh scripts were basic ifup scripts that had worked without issues with proxmox 8. But now with the updates, all of a sudden I really had to update them to the script @corvy came up with and that is in this gist now. I can't recall when I set things up but now that I have updated to these new .sh files, I am up and running again fine on reboots.
So something to definitely be careful about going forward!

This happened to me while patching proxmox nodes a couple of days ago - I needed to replug all TB cables to get all connections UP. BUT, when I was just doing reboot after another small patching later (kernel too), I didn't need to replug TB cables, so not sure what's wrong. Any idea how did you fix that?

Thanks.

I went back to the Scyto instructions and I saw that the two scripts in /usr/local/bin/pve0*.sh were older ones that didn't have repeat attempts to set the interfaces up after reboot or cable connections. It was just a simple up command in each file, and I think that was the failure. Now that I have updated to the "newer" scripts in this updated gist (see under https://gist.github.com/scyto/67fdc9a517faefa68f730f82d7fa3570#set-interfaces-to-up-on-reboots-and-cable-insertions), everything appears, at least at first glance, to be fine, and the debug results in /tmp show me that there have indeed been a few cases where the interface-up command was repeated to get one or the other interface up and running, where before there was only the one attempt.

So seems to point to that as the culprit, albeit I will continue to monitor as I reboot nodes in the future. What made it bad for me is I was actually on a VPN at my work when I rebooted the system at home, so had to scramble back early to get things up and running again before the family came home and would start screaming, since I run OPNsense virtually on the mesh.

@corvy commented Sep 21, 2025

I can also update on this. I have completed my upgrade to version 9 successfully. Not moved to the SDN setup from Proxmox (yet) - still keeping the frr setup from this gist for now. Make sure you use the new upgrade scripts I made earlier: https://gist.github.com/scyto/67fdc9a517faefa68f730f82d7fa3570#set-interfaces-to-up-on-reboots-and-cable-insertions

Run pve8to9 --full and fix EVERYTHING :) Repeat during the process.

After that I did the following:
Make sure Ceph is healthy and upgraded to the needed version. Set noout (cluster-wide, just on the first node).

ceph -s
ceph osd set noout

Config updates
Move sysctl.conf to /etc/sysctl.d/. Basically I just moved the /etc/sysctl.conf file to /etc/sysctl.d/99-sysctl.conf. These are the two needed lines in 99-sysctl.conf:

# Uncomment the next line to enable packet forwarding for IPv4
net.ipv4.ip_forward=1

# Uncomment the next line to enable packet forwarding for IPv6
net.ipv6.conf.all.forwarding=1
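
(Not part of corvy's steps, but if you want the forwarding settings active without a reboot after moving the file, reloading the sysctl drop-ins works:)

sysctl --system    # re-reads /etc/sysctl.d/*.conf and applies the values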

Comment out the following

/etc/systemd/system/frr.service.d/dependencies.conf

#[Unit]
#Wants=sys-subsystem-net-devices-en05.device sys-subsystem-net-devices-en06.device
#After=sys-subsystem-net-devices-en05.device sys-subsystem-net-devices-en06.device

Make sure there are no frr mentions here:

# /etc/network/interfaces.d/thunderbolt

auto en05
allow-hotplug en05
iface en05 inet manual
    pre-up ip link set $IFACE up
    mtu 65520

auto en06
allow-hotplug en06
iface en06 inet manual
    pre-up ip link set $IFACE up
    mtu 65520

# Loopback for Ceph MON / FRR router-id
auto lo
iface lo inet loopback
    up ip addr add <REPLACE_WITH_IPV4TBIP>/32 dev lo
    up ip -6 addr add <REPLACE_WITH_IPV6TBIP>/128 dev lo

Make sure you change the last two lines above; the IPs are the ones you use on the TB networking.

Thunderbolt udev rules:

/etc/udev/rules.d/10-tb-en.rules

ACTION=="move", SUBSYSTEM=="net", KERNEL=="en05", RUN+="/usr/local/bin/pve-en05.sh"
ACTION=="move", SUBSYSTEM=="net", KERNEL=="en06", RUN+="/usr/local/bin/pve-en06.sh"

Cordon node
Move all VMs and containers off the node before starting the upgrade. Just use migrate in the GUI. Move them around and then do the upgrades in order.

Reminder: Run pve8to9 --full and fix EVERYTHING :)

Upgrade the sources (see the upgrade doc for details). I also removed all the .list sources. Update more if needed - pay attention to getting this part right:

debian.sources

Types: deb
URIs: http://deb.debian.org/debian/
Suites: trixie trixie-updates
Components: main contrib non-free-firmware
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg

Types: deb
URIs: http://security.debian.org/debian-security/
Suites: trixie-security
Components: main contrib non-free-firmware
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg

proxmox.sources

Types: deb
URIs: http://download.proxmox.com/debian/pve
Suites: trixie
Components: pve-no-subscription
Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg

ceph.sources

Types: deb
URIs: http://download.proxmox.com/debian/ceph-squid
Suites: trixie
Components: no-subscription
Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg

Then run:

apt update
apt dist-upgrade

WARNING
I made sure after each node was upgraded that the thunderbolt networking worked well, that it came up by itself on reboots. Make 100% sure the upgraded node is in fully working order before moving to the next node. Test reboots a few times and make absolutely sure!

After all nodes are completed:

ceph osd unset noout

Following this process I had zero issues upgrading my 3 node cluster. Hope this can help someone.

@maveice commented Sep 24, 2025

After I upgraded to Proxmox v9.0.10, both of my TB4 interfaces en05 and en06 no longer show up (on 2 of my 3 cluster nodes) since yesterday - I didn't change anything in the configuration since then (the above changes had been implemented and were working well previously, also on previous kernel versions of Proxmox version 9).
2 days ago the TB4 ports did not appear any more, so my Ceph storage was not reachable any more on this node.

The node is a "MS-01" (Minisforum) running latest BIOS 1.27

Have you ever experienced this situation? Please let me know your ideas on how I can overcome this.
Thanks a lot!

PS: "lspci" shows:
00:07.0 PCI bridge: Intel Corporation Raptor Lake-P Thunderbolt 4 PCI Express Root Port #0
00:07.2 PCI bridge: Intel Corporation Raptor Lake-P Thunderbolt 4 PCI Express Root Port #2
00:0d.0 USB controller: Intel Corporation Raptor Lake-P Thunderbolt 4 USB Controller
00:0d.2 USB controller: Intel Corporation Raptor Lake-P Thunderbolt 4 NHI #0
00:0d.3 USB controller: Intel Corporation Raptor Lake-P Thunderbolt 4 NHI #1

@corvy commented Sep 24, 2025

@maveice, do the /usr/local/bin/pve-en0*.sh scripts run? Check /tmp/udev-debug.log. If nothing is there, try to run the scripts manually.

@ssavkar commented Sep 24, 2025

@maveice, do the /usr/local/bin/pve-en0*.sh scripts run? Check /tmp/udev-debug.log. If nothing is there, try to run the scripts manually.

Right definitely start there. Also if there is nothing in the logs try to unplug and plug back in a port and see if that separately triggers anything. I admit this weekend I had a weird issue in another cluster I had and this was the only way for now I found to bring back up TB from a full shutdown. Haven’t had time to look at it further at the moment.
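
For reference, the manual check described above amounts to something like this (run as root; paths are the ones from the gist):

/usr/local/bin/pve-en05.sh && /usr/local/bin/pve-en06.sh
tail -n 20 /tmp/udev-debug.log       # both scripts append their attempts here
ip -br link | grep -E 'en05|en06'    # the interfaces should now show as UP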

@corvy commented Sep 24, 2025

My experience is that the following parts are key to make this work.

  1. Comment out the rules for frr here: /etc/systemd/system/frr.service.d/dependencies.conf
#[Unit]
#Wants=sys-subsystem-net-devices-en05.device sys-subsystem-net-devices-en06.device
#After=sys-subsystem-net-devices-en05.device sys-subsystem-net-devices-en06.device
  2. Check the udev rules here: /etc/udev/rules.d/10-tb-en.rules - double-check the ACTION part.
ACTION=="move", SUBSYSTEM=="net", KERNEL=="en05", RUN+="/usr/local/bin/pve-en05.sh"
ACTION=="move", SUBSYSTEM=="net", KERNEL=="en06", RUN+="/usr/local/bin/pve-en06.sh"
  3. Make sure you have the correct scripts installed, and that they have execute permissions etc. (/usr/local/bin/pve-en0*.sh)

Other than that, maybe the rename part of the gist - make sure the interfaces are indeed named en05 and en06: https://gist.github.com/scyto/67fdc9a517faefa68f730f82d7fa3570#rename-thunderbolt-connections

@ilbarone87 commented Sep 24, 2025

  1. Comment out the rules for frr here: /etc/systemd/system/frr.service.d/dependencies.conf
#[Unit]
#Wants=sys-subsystem-net-devices-en05.device sys-subsystem-net-devices-en06.device
#After=sys-subsystem-net-devices-en05.device sys-subsystem-net-devices-en06.device

Weird enough i don’t have any …/frr.service.d/dependencies.conf. Is this a ceph conf file perhaps?

@corvy commented Sep 24, 2025

If it is not there, that is fine. The problem would only be if it was there, and not commented out.

Weird enough i don’t have any …/frr.service.d/dependencies.conf. Is this a ceph conf file perhaps?

No, this has nothing to do with Ceph - this is just the routing/SDN fabric for the network that Ceph is using.
