This gist is part of this series.
- add the `thunderbolt` and `thunderbolt-net` kernel modules (this must be done on all nodes - yes, I know it can sometimes work without them, but the thunderbolt-net one has interesting behaviour, so do as I say - add both ;-)

  `nano /etc/modules`

  Add the modules at the bottom of the file, one on each line, then save using Ctrl+X, then Y, then Enter.
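As a sanity check, /etc/modules should now end with the two module names. The sketch below also loads them immediately so you can verify without rebooting (the modprobe/lsmod lines are my addition, not part of the original steps):

```bash
# /etc/modules should now end with these two lines:
#   thunderbolt
#   thunderbolt-net
# load them now and confirm they registered (optional - a reboot does the same)
modprobe thunderbolt
modprobe thunderbolt-net
lsmod | grep thunderbolt
```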
Doing this means we don't have to give each Thunderbolt interface a manual IPv6 or IPv4 address, and these addresses stay constant no matter what.

Add the following to each node using `nano /etc/network/interfaces`. If you see any sections called thunderbolt0 or thunderbolt1, delete them at this point.
The comments are there to remind you not to edit en05 and en06 in the GUI. This fragment should go between the existing `auto lo` section and the adapter sections.
```
iface en05 inet manual
#do not edit in GUI

iface en06 inet manual
#do not edit in GUI
```
If you see any thunderbolt sections, delete them from the file before you save it.

**DO NOT DELETE** the `source /etc/network/interfaces.d/*` line - it will always exist on the latest versions and should be the last or next-to-last line in the interfaces file.
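For orientation, a minimal sketch of how the finished /etc/network/interfaces might look (the `enp86s0` adapter name is a placeholder - keep whatever your existing adapter and bridge sections contain):

```
auto lo
iface lo inet loopback

iface en05 inet manual
#do not edit in GUI

iface en06 inet manual
#do not edit in GUI

iface enp86s0 inet manual
# ...your existing adapter/bridge sections...

source /etc/network/interfaces.d/*
```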
This is needed as Proxmox doesn't recognize the Thunderbolt interface names. There are various methods to do this; this method was selected after trial and error because:

- the thunderboltX naming is not fixed to a port (it seems to be based on the sequence in which you plug the cables in)
- the MAC address of the interfaces changes with most cable insertion and removal events
- use the `udevadm monitor` command to find your device IDs when you insert and remove each TB4 cable. Yes, you can use other ways to do this; I recommend this one as it is a great way to understand what udev does - the command proved more useful to me than the syslog or the `lspci` command for troubleshooting Thunderbolt issues and behaviours. In my case my two PCI paths are `0000:00:0d.2` and `0000:00:0d.3`; if you bought the same hardware this will be the same on all 3 units. Don't assume your PCI device paths will be the same as mine (see the sketch below).
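A quick sketch of that workflow (the sysfs check is my addition, and the paths shown are from my units - yours may differ):

```bash
# watch udev events, then insert and remove each TB4 cable and note the PCI path
udevadm monitor
# once you have candidate paths, confirm they exist in sysfs
ls -d /sys/bus/pci/devices/0000:00:0d.2 /sys/bus/pci/devices/0000:00:0d.3
```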
- create a link file using `nano /etc/systemd/network/00-thunderbolt0.link` and enter the following content:
```
[Match]
Path=pci-0000:00:0d.2
Driver=thunderbolt-net

[Link]
MACAddressPolicy=none
Name=en05
```
- create a second link file using `nano /etc/systemd/network/00-thunderbolt1.link` and enter the following content:
```
[Match]
Path=pci-0000:00:0d.3
Driver=thunderbolt-net

[Link]
MACAddressPolicy=none
Name=en06
```
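After the initramfs update and reboot described further down, you can confirm the renames took effect with something like this (my addition, not part of the original steps):

```bash
ip -br link | grep -E 'en0[56]'
```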
This section ensures that the interfaces will be brought up at boot or on cable insertion with whatever settings are in /etc/network/interfaces - this shouldn't need to be done, it seems like a bug in the way Thunderbolt networking is handled (I assume this is Debian-wide but haven't checked).

Huge thanks to @corvy for figuring out a script that should make this much, much more reliable for most.
- create a udev rule to detect cable insertion using `nano /etc/udev/rules.d/10-tb-en.rules` with the following content:
ACTION=="move", SUBSYSTEM=="net", KERNEL=="en05", RUN+="/usr/local/bin/pve-en05.sh"
ACTION=="move", SUBSYSTEM=="net", KERNEL=="en06", RUN+="/usr/local/bin/pve-en06.sh"
- save the file
- create the first script referenced above using `nano /usr/local/bin/pve-en05.sh` with the following content:
```bash
#!/bin/bash

LOGFILE="/tmp/udev-debug.log"
VERBOSE="" # Set this to "-v" for verbose logging
IF="en05"

echo "$(date): pve-$IF.sh triggered by udev" >> "$LOGFILE"

# If multiple interfaces go up at the same time,
# retry 10 times and break the retry when successful
for i in {1..10}; do
    echo "$(date): Attempt $i to bring up $IF" >> "$LOGFILE"
    /usr/sbin/ifup $VERBOSE $IF >> "$LOGFILE" 2>&1 && {
        echo "$(date): Successfully brought up $IF on attempt $i" >> "$LOGFILE"
        break
    }
    echo "$(date): Attempt $i failed, retrying in 3 seconds..." >> "$LOGFILE"
    sleep 3
done
```
- save the file
- create the second script referenced above using `nano /usr/local/bin/pve-en06.sh` with the following content:
```bash
#!/bin/bash

LOGFILE="/tmp/udev-debug.log"
VERBOSE="" # Set this to "-v" for verbose logging
IF="en06"

echo "$(date): pve-$IF.sh triggered by udev" >> "$LOGFILE"

# If multiple interfaces go up at the same time,
# retry 10 times and break the retry when successful
for i in {1..10}; do
    echo "$(date): Attempt $i to bring up $IF" >> "$LOGFILE"
    /usr/sbin/ifup $VERBOSE $IF >> "$LOGFILE" 2>&1 && {
        echo "$(date): Successfully brought up $IF on attempt $i" >> "$LOGFILE"
        break
    }
    echo "$(date): Attempt $i failed, retrying in 3 seconds..." >> "$LOGFILE"
    sleep 3
done
```
- save the file
- make both scripts executable with `chmod +x /usr/local/bin/*.sh`
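To pick up the new rule and watch the scripts fire without waiting for the reboot below, something like this should work (my addition; the reboot is still required for the link files):

```bash
udevadm control --reload      # re-read the rules in /etc/udev/rules.d
tail -f /tmp/udev-debug.log   # watch the scripts log while you replug a TB4 cable
```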
- run `update-initramfs -u -k all` to propagate the new link files into the initramfs
- reboot (restarting networking, init 1 and init 3 are not good enough, so reboot)
### Install LLDP - this is great to see which nodes can see each other
- install lldpd with `apt install lldpd` on all 3 nodes
- execute `lldpctl` - you should see info about the neighbouring nodes
If you are having speed issues, make sure the following is set on the kernel command line in your /etc/default/grub file:

`intel_iommu=on iommu=pt`

Once set, be sure to run `update-grub` and reboot.
Everyone's grub command line is different - this is mine. If you get this wrong you can break your machine. (I also have i915 virtualization entries in mine; if you are not doing that you don't need them, so they are not shown below.)

`GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"`

(Note: if you have more things in your command line, DO NOT REMOVE them - just add the two intel ones; it doesn't matter where.)
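The full sequence, for clarity:

```bash
nano /etc/default/grub   # add intel_iommu=on iommu=pt to GRUB_CMDLINE_LINUX_DEFAULT
update-grub              # regenerate the grub config
reboot
```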
To find out which cores are which, run:

`cat /sys/devices/cpu_core/cpus && cat /sys/devices/cpu_atom/cpus`

You should get two lines on an Intel system with P and E cores: the first line is your P cores, the second your E cores.
For example, on mine:

```
root@pve1:/etc/pve# cat /sys/devices/cpu_core/cpus && cat /sys/devices/cpu_atom/cpus
0-7
8-15
```
- make a file at /etc/network/if-up.d/thunderbolt-affinity
- add the following to it - make sure to replace `X-Y` in `echo X-Y` with whatever the report told you your performance cores are, e.g. `echo 0-7`
```bash
#!/bin/bash

# Check if the interface is either en05 or en06
if [ "$IFACE" = "en05" ] || [ "$IFACE" = "en06" ]; then
    # Set Thunderbolt IRQ affinity to P-cores
    grep thunderbolt /proc/interrupts | cut -d ":" -f1 | xargs -I {} sh -c 'echo X-Y | tee "/proc/irq/{}/smp_affinity_list"'
fi
```
- save the file and make it executable (if-up.d scripts only run when executable): `chmod +x /etc/network/if-up.d/thunderbolt-affinity` - done
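To check the pinning took effect after an interface comes up, you can read the affinity back (my addition, derived from the script above):

```bash
# show each thunderbolt IRQ and the cores it is pinned to
grep thunderbolt /proc/interrupts | cut -d ":" -f1 | xargs -I {} sh -c 'echo "IRQ {}: $(cat /proc/irq/{}/smp_affinity_list)"'
```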
I have only tried this on 6.8 kernels, so YMMV. If you want more TB messages in dmesg to see why a connection might be failing, here is how to turn on dynamic tracing.

For boot time you will need to add it to the kernel command line by adding `thunderbolt.dyndbg=+p` to your /etc/default/grub file, running `update-grub` and rebooting.
To expand the example above:

`GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt thunderbolt.dyndbg=+p"`

Don't forget to run `update-grub` after saving the change to the grub file.
For runtime debug you can run the following command (it will revert on next boot, so this can't be used to capture what happens at boot time):
`echo -n 'module thunderbolt =p' > /sys/kernel/debug/dynamic_debug/control`
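Either way, you can then follow the extra Thunderbolt messages live with something like:

```bash
dmesg -wH | grep -i thunderbolt
```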
These tools can be used to inspect your Thunderbolt system. Note they rely on Rust being installed; you must use the rustup script below and not install Rust via your package manager at this time (9/15/24).
```bash
apt install pkg-config libudev-dev git curl
curl https://sh.rustup.rs -sSf | sh
git clone https://github.com/intel/tbtools
```

Restart your SSH session (so cargo is on your PATH), then:

```bash
cd tbtools
cargo install --path .
```
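A quick usage sketch (tool name per the tbtools README; treat the exact command set as an assumption and check the README for your version):

```bash
# list the Thunderbolt devices visible on this host (installed into ~/.cargo/bin)
tblist
```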
So everyone is a little different. For instance, I did not make the changes in the if-up.d folder but instead had a dependencies.conf file in my systemd services which essentially had the same functionality, and which I fully commented out. After that, on each node, I very carefully followed the directions that Proxmox has for the 8 to 9 update and kept double-checking pve8to9 to see what warnings or failures arose, to keep things all clean.
One thing I did that some have not: besides commenting out the frr files I had used to ensure they came up at the right time, I also (as was the case when you upgraded to Squid) set the noout flag on each node to ensure no changes to the node in the midst of the update.
So `ceph osd set noout`. Then, after reboot, I unset the noout flag.
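For reference, the flag pair is the standard Ceph one:

```bash
ceph osd set noout     # before taking the node down for the upgrade
ceph osd unset noout   # after the node is back up
```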
The only other thing I'd note is that at the end of the process on each node, besides cleaning up apt/sources, I also had to remove some old system files and install a new one as suggested by the pve8to9 script, which again noted some stale files that still needed to be taken care of.
I didn't find the need to do much else, so long as you really do follow the online instructions Proxmox has put together for the 8 to 9 upgrade.
I still have one node that isn't part of a cluster which runs my OPNsense instance, and I haven't yet figured out how to deal with that one, since I can't really move OPNsense off of it, at least not at the moment. And of course during the upgrade I'd lose access to the internet if I tried to leave it as is. But I may just deal with that at a later date; it's not a real issue sticking with 8 for that one node.
*** Oh, and to your question as to what they should look like post-install: I don't think you have the same issues we all had with 8, so I have not had to modify anything further for frr to come up OK now.