this gist is part of this series
- add the `thunderbolt` and `thunderbolt-net` kernel modules (this must be done on all nodes - yes i know it can sometimes work without them, but the thunderbolt-net one has interesting behaviour so do as i say - add both ;-)
  `nano /etc/modules`
  add the modules at the bottom of the file, one on each line - save using ctrl-x, then y, then enter
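for reference, after the edit the bottom of `/etc/modules` should contain just these two lines (leave anything already in the file alone):

```
thunderbolt
thunderbolt-net
```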
Doing this means we don't have to give each thunderbolt interface a manual IPv6 or IPv4 address and that these addresses stay constant no matter what.

Add the following to each node using `nano /etc/network/interfaces` - this is to remind you not to edit en05 and en06 in the GUI. If you see any sections called thunderbolt0 or thunderbolt1, delete them at this point.
This fragment should go between the existing `auto lo` section and the adapter sections.
iface en05 inet manual
#do not edit in GUI
iface en06 inet manual
#do not edit in GUI
If you see any thunderbolt sections, delete them from the file before you save it.
*DO NOT DELETE* the `source /etc/network/interfaces.d/*` line - it will always exist on the latest versions and should be the last or next-to-last line in the interfaces file.
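for orientation only, here is a rough sketch of how the file can end up looking - this is not a copy of my actual file; the `vmbr0` bridge, `enp86s0` NIC name and addresses are placeholders and yours will differ:

```
auto lo
iface lo inet loopback

iface en05 inet manual
#do not edit in GUI

iface en06 inet manual
#do not edit in GUI

# example management bridge - replace with your own existing config
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.81/24
        gateway 192.168.1.1
        bridge-ports enp86s0
        bridge-stp off
        bridge-fd 0

source /etc/network/interfaces.d/*
```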
This is needed as proxmox doesn't recognize the thunderbolt interface name. There are various methods to do this. This method was selected after trial and error because:
- the thunderboltX naming is not fixed to a port (it seems to be based on the sequence in which you plug the cables in)
- the MAC address of the interfaces changes with most cable insertion and removal events
- use the `udevadm monitor` command to find your device IDs when you insert and remove each TB4 cable (there's also a one-liner alternative below). Yes, you can use other ways to do this; i recommend this one as it is a great way to understand what udev does - the command proved more useful to me than the syslog or `lspci` command for troubleshooting thunderbolt issues and behaviours. In my case my two pci paths are `0000:00:0d.2` and `0000:00:0d.3` - if you bought the same hardware this will be the same on all 3 units. Don't assume your PCI device paths will be the same as mine.
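if you'd rather not watch the monitor output, you can also ask udev directly for the path property that the link files below match on - this assumes your interface currently still has its default thunderbolt0 name (adjust if not), and you should double-check the output on your own hardware:

```
udevadm info /sys/class/net/thunderbolt0 | grep ID_PATH
```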
- create a link file using `nano /etc/systemd/network/00-thunderbolt0.link` and enter the following content:
[Match]
Path=pci-0000:00:0d.2
Driver=thunderbolt-net
[Link]
MACAddressPolicy=none
Name=en05
- create a second link file using `nano /etc/systemd/network/00-thunderbolt1.link` and enter the following content:
[Match]
Path=pci-0000:00:0d.3
Driver=thunderbolt-net
[Link]
MACAddressPolicy=none
Name=en06
This section ensures that the interfaces will be brought up at boot or on cable insertion with whatever settings are in /etc/network/interfaces - this shouldn't need to be done; it seems like a bug in the way thunderbolt networking is handled (i assume this is debian-wide but haven't checked).
Huge thanks to @corvy for figuring out a script that should make this much, much more reliable for most people.
- create a udev rule to detect cable insertion using `nano /etc/udev/rules.d/10-tb-en.rules` with the following content:
ACTION=="move", SUBSYSTEM=="net", KERNEL=="en05", RUN+="/usr/local/bin/pve-en05.sh"
ACTION=="move", SUBSYSTEM=="net", KERNEL=="en06", RUN+="/usr/local/bin/pve-en06.sh"
- save the file
- create the first script referenced above using `nano /usr/local/bin/pve-en05.sh` with the following content:
#!/bin/bash
LOGFILE="/tmp/udev-debug.log"
VERBOSE="" # Set this to "-v" for verbose logging
IF="en05"
echo "$(date): pve-$IF.sh triggered by udev" >> "$LOGFILE"
# If multiple interfaces go up at the same time,
# retry 10 times and break the retry when successful
for i in {1..10}; do
echo "$(date): Attempt $i to bring up $IF" >> "$LOGFILE"
/usr/sbin/ifup $VERBOSE $IF >> "$LOGFILE" 2>&1 && {
echo "$(date): Successfully brought up $IF on attempt $i" >> "$LOGFILE"
break
}
echo "$(date): Attempt $i failed, retrying in 3 seconds..." >> "$LOGFILE"
sleep 3
done
- save the file
- create the second script referenced above using `nano /usr/local/bin/pve-en06.sh` with the following content:
#!/bin/bash
LOGFILE="/tmp/udev-debug.log"
VERBOSE="" # Set this to "-v" for verbose logging
IF="en06"
echo "$(date): pve-$IF.sh triggered by udev" >> "$LOGFILE"
# If multiple interfaces go up at the same time,
# retry 10 times and break the retry when successful
for i in {1..10}; do
echo "$(date): Attempt $i to bring up $IF" >> "$LOGFILE"
/usr/sbin/ifup $VERBOSE $IF >> "$LOGFILE" 2>&1 && {
echo "$(date): Successfully brought up $IF on attempt $i" >> "$LOGFILE"
break
}
echo "$(date): Attempt $i failed, retrying in 3 seconds..." >> "$LOGFILE"
sleep 3
done
- save the file
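both scripts write to the same log file, so once everything is in place you can watch them fire when you pull and reinsert a cable:

```
tail -f /tmp/udev-debug.log
```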
- make both scripts executable with
chmod +x /usr/local/bin/*.sh
- run `update-initramfs -u -k all` to propagate the new link files into the initramfs
- reboot (restarting networking, init 1 and init 3 are not good enough, so reboot)
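after the reboot, a quick sanity check that the modules loaded and the renames took effect:

```
lsmod | grep thunderbolt
ip -br link show en05
ip -br link show en06
```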
## Install LLDP - this is great for seeing which nodes can see each other.
- install lldpd with `apt install lldpd` on all 3 nodes
- execute `lldpctl` - you should see info about the neighbouring nodes
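lldpctl is part of the same lldpd package; if you prefer its own CLI (which you can also use interactively), the neighbor view is:

```
lldpcli show neighbors
```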
if you are having speed issues, make sure the following is set on the kernel command line in the /etc/default/grub file:
`intel_iommu=on iommu=pt`
once set, be sure to run `update-grub` and reboot
everyone's grub command line is different; this is mine. i also have i915 virtualization set up, so my real command line has extra i915 entries - if you are not doing that you don't need them, and they aren't shown below. if you get this wrong you can break your machine.
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
(note: if you have more things in your cmd line DO NOT REMOVE them, just add the two intel ones - it doesn't matter where.)
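after running update-grub and rebooting you can confirm the flags actually made it onto the running kernel:

```
cat /proc/cmdline
```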
To pin the thunderbolt interrupts to your performance cores, first find out which cores are which:
- run `cat /sys/devices/cpu_core/cpus && cat /sys/devices/cpu_atom/cpus`
- you should get two lines on an intel system with P and E cores: the first line should be your P cores and the second line your E cores
for example on mine:
root@pve1:/etc/pve# cat /sys/devices/cpu_core/cpus && cat /sys/devices/cpu_atom/cpus
0-7
8-15
- make a file at `/etc/network/if-up.d/thunderbolt-affinity`
- add the following to it - make sure to replace `echo X-Y` with whatever the report above said your performance cores are, e.g. `echo 0-7`
#!/bin/bash
# Check if the interface is either en05 or en06
if [ "$IFACE" = "en05" ] || [ "$IFACE" = "en06" ]; then
# Set Thunderbolt affinity to P-cores
grep thunderbolt /proc/interrupts | cut -d ":" -f1 | xargs -I {} sh -c 'echo X-Y | tee "/proc/irq/{}/smp_affinity_list"'
fi
- save the file - done
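one gotcha worth checking: if-up.d scripts are run by run-parts, which silently skips files that aren't executable, so make sure the file is executable. you can also reuse the same grep from the script to confirm the affinity actually changed once an interface has come up:

```
chmod +x /etc/network/if-up.d/thunderbolt-affinity

# list the current affinity of every thunderbolt IRQ - should show your P-core range (e.g. 0-7)
grep thunderbolt /proc/interrupts | cut -d ":" -f1 | xargs -I {} cat "/proc/irq/{}/smp_affinity_list"
```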
I have only tried this on 6.8 kernels, so YMMV.
If you want more TB messages in dmesg to see why a connection might be failing, here is how to turn on dynamic tracing.
For boot time you will need to add it to the kernel command line by adding thunderbolt.dyndbg=+p to your /etc/default/grub file, running update-grub and rebooting.
To expand the example above:
`GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt thunderbolt.dyndbg=+p"`
Don't forget to run update-grub
after saving the change to the grub file.
For runtime debug you can run the following command (it will revert on next boot), so this can't be used to capture what happens at boot time.
`echo -n 'module thunderbolt =p' > /sys/kernel/debug/dynamic_debug/control`
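either way, the extra messages land in the kernel ring buffer; the easiest way to watch them live while you plug and unplug cables:

```
dmesg -w | grep -i thunderbolt
```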
These tools can be used to inspect your thunderbolt system. Note they rely on rust being installed - you must use the rustup script below and not install rust via the package manager at this time (9/15/24).
apt install pkg-config libudev-dev git curl
curl https://sh.rustup.rs -sSf | sh
git clone https://github.com/intel/tbtools
restart your ssh session
cd tbtools
cargo install --path .
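cargo installs the resulting binaries into ~/.cargo/bin (rustup adds that to your PATH, hence the ssh session restart). as an example, per the tbtools README (i'm going from the README here, so check it for the full tool list), listing the connected thunderbolt devices is:

```
tblist
```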
spamming another update. while i haven't had a lockup of my vm after the last changes i made, i'm still looking to reduce the retries and these ceph aio_submit retry messages in my system log.
I've been doing a bunch of testing since i still think it stems from packet loss. I've found no appreciable difference messing with kernel-level settings for tcp window size, net.core.rmem_max and wmem_max and a few other kernel-level settings; in fact i often made things worse. I also tried disabling offloading on the thunderbolt interfaces - it made performance worse - but i didn't methodically try different offloading combinations; i have seen some improvements on physical NICs from disabling only specific offloading parameters.
At this point i'm thinking either there's some kind of issue with flow control not working right, or the thunderbolt controller just can't keep up and is dropping/corrupting data when it's loaded bidirectionally. Why do i say corrupting? because looking at interface stats using ip link i'm also seeing crc errors. I see crc errors on all of my nodes, i'm using certified owc tb4 cables and i even tried an expensive active apple thunderbolt 4 cable, which rules out bad cables.
I decided to mess around with the queueing discipline (qdisc) first, thinking it might be a flow control issue. On my machine the thunderbolt interfaces default to a qdisc of pfifo_fast, and in my testing this has the highest retries. I found arguably fewer retries with pfifo, enough of an improvement with fq to say it's not within the range of error, and a significant improvement with fq_codel. I found on average a 60-70% reduction in retries with fq_codel and, with "iperf3 --bidir", a bidirectional 25-26gbps. I was still getting some packet drops on the interfaces, but as long as the application layer wasn't getting pissed off I'm not sure i care that much.
I wanted to take it a step further, since with ceph both en05 and en06 could be loaded concurrently. So i ran 2 x iperf3 servers on one node on different ports, say PVE3, then ran bidirectional iperf3s from both PVE2 and PVE4 at the same time to PVE3 - the idea being to load both thunderbolt ports on PVE3 and see what happens. I immediately saw significantly reduced performance and increased retries. When i was running the iperf3 against a single machine i was seeing 25-26gbps both ways, but when both machines were hitting PVE3 throughput dropped off and was kind of asymmetric - for example i saw something like 14gbps/18gbps. I verified this with different nodes running the server each time.
Now i remembered a post earlier from @razqqm using tc qdiscs to rate limit, so i tried a few rate-limiting qdiscs. I tried "cake" and "tbf w/ fq_codel"; i didn't try htb as @razqqm used. Cake is much easier to configure, but i thought maybe tbf w/ fq_codel might perform better, since fq_codel performed better on its own. I experimented with both of them at different bandwidth limits; on my 13900h's, 15gbps seemed to be about the sweet spot for loading en05 and en06 at the same time with minimal retries. I didn't see any significant difference in performance between the two of them, so i implemented cake. I still get retries and packet loss if both interfaces are loaded, but significantly less. Also significantly less packet loss and crc errors in production.
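for anyone wanting to reproduce the experiments, these are the kind of standard tc one-liners involved (they don't survive a reboot; 15gbit was the sweet spot on my 13900h nodes, tune it for yours):

```
# try fq_codel on its own
tc qdisc replace dev en05 root fq_codel
tc qdisc replace dev en06 root fq_codel

# or rate limit with cake (what i ended up keeping)
tc qdisc replace dev en05 root cake bandwidth 15gbit
tc qdisc replace dev en06 root cake bandwidth 15gbit

# see what's applied and watch the drop counters
tc -s qdisc show dev en05
```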
In production I'm still getting some of the ceph aio_submit retry messages in my system logs; however, both are significantly reduced. I'm hopeful I can resolve these damn lockups, especially since i'm going on vacation in a week. I'm still trying to isolate a few more possible causes, but i'm hopeful others may find my multi-post novel here helpful.
To set the qdisc on boot, create a file in /etc/network/if-up.d/ - i called mine set-qdisc: `vi /etc/network/if-up.d/set-qdisc`
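a minimal sketch of what such a script can look like (assuming cake at the 15gbit limit discussed above, applied only to the two thunderbolt interfaces) - don't forget to chmod +x it:

```
#!/bin/bash
# /etc/network/if-up.d/set-qdisc
# apply cake with a bandwidth cap whenever en05/en06 are brought up
if [ "$IFACE" = "en05" ] || [ "$IFACE" = "en06" ]; then
    tc qdisc replace dev "$IFACE" root cake bandwidth 15gbit
fi
```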