This guide documents the steps to enable and configure Thunderbolt networking on your Linux system (e.g., Ubuntu, Proxmox). It covers loading the necessary kernel modules, ensuring persistent interface naming, configuring a fixed IP address, and testing throughput with iperf3. Note that Thunderbolt 3/4 hardware advertises a raw bandwidth of 40 Gbps, but practical throughput is typically lower due to half‑duplex operation, protocol overhead, and system constraints.
This guide was developed and tested with the following hardware:
- CPU: AMD Ryzen 9 PRO 6950H
  - 8 cores / 16 threads
  - 3.3 GHz base frequency / 4.94 GHz boost
  - Zen 3+ architecture (Family 25, Model 68)
  - 6 nm process technology
- Thunderbolt/USB4 interface: dual USB4 ports with 40 Gbps bandwidth
- Mini PC model: GMKtec M7 Pro
- Operating system: Proxmox (Debian-based)
The GMKtec M7 Pro features both USB4 and OCuLink connectivity.

USB4:

- Dual USB4 ports supporting 40 Gbps transfer speeds
- USB4/Thunderbolt compatibility for high-speed data transfer
- Support for external devices including displays and storage
- 8K@60Hz video output through the USB4 ports

OCuLink (aimed at eGPU use):

- Dedicated OCuLink port designed for external GPU (eGPU) connectivity
- Direct PCIe x4 link, providing more usable bandwidth than Thunderbolt's tunneled PCIe
- Lower latency than Thunderbolt connections for graphics-intensive applications
- Better frame rates and improved performance for gaming and compute workloads
- Preferred connection for modern eGPU enclosures when maximum performance is required
The combination of USB4 and OCuLink makes this system particularly well-suited for both Thunderbolt networking (via USB4) and external GPU connectivity (via OCuLink), giving you flexibility for different high-bandwidth applications.
- A Linux system with Thunderbolt/USB4 hardware.
- A Thunderbolt cable connecting two machines.
- Sudo privileges.
- Kernel support for Thunderbolt and the `thunderbolt_net` module.
- A system using `/etc/network/interfaces` (e.g., Proxmox) rather than Netplan.
- **Update Package Lists and Install Bolt:**

```bash
sudo apt update
sudo apt install bolt
```

The `bolt` utility helps manage Thunderbolt device security and authorization.
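If a connected peer later shows up but stays unauthorized, the `boltctl` command from this package can inspect and enroll devices. A minimal sketch, guarded so it is a no-op on machines without Thunderbolt hardware (the enroll line is commented out because the device UUID is system-specific):

```shell
# Inspect Thunderbolt devices and their authorization status with boltctl.
# Guarded so this does nothing on systems without boltd/Thunderbolt.
if command -v boltctl >/dev/null 2>&1; then
    boltctl list || true           # show devices and their security status
    # boltctl enroll <device-uuid> # persistently authorize a device
fi
```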
IOMMU support is crucial for Thunderbolt networking security and performance.
- **Edit the GRUB Configuration:**

```bash
sudo vi /etc/default/grub
```

- **Modify the GRUB_CMDLINE_LINUX_DEFAULT Line:**

For AMD Ryzen processors (like the Ryzen 9 PRO 6950H):

```
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"
```

For Intel processors:

```
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
```
- **Update GRUB:**

```bash
sudo update-grub
```

- **Reboot Your System:**

```bash
sudo reboot
```

- **Verify IOMMU Is Enabled After Reboot:**

```bash
dmesg | grep -i iommu
```

For AMD systems, you should see output containing "AMD-Vi" references.
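The verification step can also be scripted. A small sketch that checks the running kernel's command line for either vendor's flag; the check is wrapped in a function so the logic can be exercised against canned text as well as the real `/proc/cmdline`:

```shell
# Check that the IOMMU flag made it onto the running kernel command line.
has_iommu_flag() {
    printf '%s\n' "$1" | grep -qE 'amd_iommu=on|intel_iommu=on'
}

if has_iommu_flag "$(cat /proc/cmdline)"; then
    echo "IOMMU flag present on the kernel command line"
else
    echo "IOMMU flag missing: re-check /etc/default/grub and rerun update-grub"
fi
```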
If you're using an AMD Ryzen processor like the Ryzen 9 PRO 6950H:
- **Thunderbolt Controller Implementation:**
  - Many AMD systems implement Thunderbolt via a discrete controller (often Alpine Ridge, Titan Ridge, or Maple Ridge) rather than one integrated into the chipset; newer Ryzen 6000-series parts integrate USB4 directly into the SoC.
  - Either way, confirm your motherboard or laptop actually exposes a Thunderbolt/USB4 controller.
- **BIOS/UEFI Settings:**
  - Check your BIOS/UEFI settings to confirm Thunderbolt support is enabled.
  - Some AMD systems require explicitly enabling Thunderbolt in the BIOS.
  - Look for settings related to "Thunderbolt", "USB4", or "PCIe tunneling".
- **PCIe Allocation:**
  - On some AMD systems, enabling Thunderbolt may reduce the PCIe lanes available to other devices.
  - If you see performance issues, verify the PCIe lane allocation in your BIOS.
- **Firmware Updates:**
  - AMD Thunderbolt support has improved with firmware updates, so make sure your system BIOS is up to date.
- **Connect the Thunderbolt Cable:**

Ensure the cable is properly connected between your two machines.

- **Check Kernel Messages:**

```bash
sudo dmesg | grep -i thunderbolt
```

Expected output:

```
[    7.832660] thunderbolt 1-2: new host found, vendor=0x8086 device=0x1
[    7.832667] thunderbolt 1-2: Intel Corp. proxmox02
```
- **List Thunderbolt Devices:**

```bash
ls /sys/bus/thunderbolt/devices/
```

You should see entries like:

```
0-0  1-0  1-2  domain0  domain1
```

Here, `0-0` is the local controller and `1-2` is the remote device.
In Proxmox Linux, the Thunderbolt module is typically loaded by default. Let's verify:
- **Check Whether the Modules Are Loaded:**

```bash
sudo lsmod | grep thunderbolt
sudo lsmod | grep thunderbolt_net
```

- **If `thunderbolt_net` Is Not Loaded:**

```bash
sudo modprobe thunderbolt_net
```

Note that the base `thunderbolt` module is almost always loaded automatically by Proxmox if you have Thunderbolt hardware.
This step is only needed if your system doesn't automatically load the `thunderbolt_net` module:
- **Create a Modules-Load File with vi:**

```bash
sudo vi /etc/modules-load.d/thunderbolt_net.conf
```

- **Insert the Following Line** (press `i` to insert):

```
thunderbolt_net
```

- **Save and Exit:**

Press `Esc`, type `:wq`, then press `Enter`.

- **Verify After the Next Boot:**

```bash
lsmod | grep thunderbolt_net
```
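The two `lsmod` checks can be collapsed into one scripted test. A sketch that takes the module list as text (e.g., the contents of `/proc/modules`), so the matching logic itself is verifiable without the hardware:

```shell
# Return success only if both thunderbolt modules appear in the list.
# $1 is /proc/modules-style text: "name size refcount ..." per line.
modules_loaded() {
    for m in thunderbolt thunderbolt_net; do
        printf '%s\n' "$1" | grep -q "^$m " || return 1
    done
}

if modules_loaded "$(cat /proc/modules 2>/dev/null)"; then
    echo "thunderbolt and thunderbolt_net are loaded"
fi
```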
To ensure the interface gets a predictable name (e.g., `eno3`):
- **Create the Link File Using vi:**

```bash
sudo vi /etc/systemd/network/00-thunderbolt-eno3.link
```

- **Insert the Following Content:**

```
[Match]
Driver=thunderbolt-net

[Link]
Name=eno3
MACAddressPolicy=none
```

- **Save and Exit vi:**

Press `Esc`, type `:wq`, then press `Enter`.
- **Reload Udev Rules:**

```bash
sudo udevadm control --reload
```

- **Trigger the Network Subsystem:**

```bash
sudo udevadm trigger --subsystem-match=net
```
- **List Network Interfaces:**

```bash
ip addr
```

Check for a new interface named `eno3`.

- **Review Kernel Logs:**

```bash
sudo dmesg | grep -i thunderbolt
```

Look for messages indicating the interface is ready.
Since your system uses `/etc/network/interfaces`, configure the interface with a fixed IP (10.0.1.1).
- **Edit `/etc/network/interfaces` Using vi:**

```bash
sudo vi /etc/network/interfaces
```

- **Add or Modify the Thunderbolt Section:**

```
allow-hotplug eno3
iface eno3 inet static
    address 10.0.1.1
    netmask 255.255.255.0
    pre-up ip link set $IFACE up
```

Using `allow-hotplug` ensures the configuration is applied when the device is detected, and `pre-up` forces the interface up before the IP settings are applied.
- **Save and Exit vi:**

Press `Esc`, type `:wq`, and press `Enter`.

- **Restart Networking:**

```bash
sudo systemctl restart networking
```
- **Verify the IP Configuration:**

```bash
ip addr show eno3
```

You should see the interface in the UP state with IP 10.0.1.1.
(Note: If the IP isn't applied immediately due to timing issues, proceed to Step 8.)
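The verification can also be automated. A sketch that matches the expected address in `ip -o addr` output; the parsing lives in a function that takes the text as an argument, so it can be tested against a captured line:

```shell
# Succeed if the given `ip -o addr show <iface>` output carries the
# expected address/prefix.
iface_has_addr() {
    # $1 = ip -o addr output, $2 = expected CIDR, e.g. 10.0.1.1/24
    printf '%s\n' "$1" | grep -q "inet $2 "
}

if iface_has_addr "$(ip -o addr show eno3 2>/dev/null)" "10.0.1.1/24"; then
    echo "eno3 is up with 10.0.1.1/24"
fi
```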
To mitigate timing issues where the interface isn't available when networking starts, create a systemd service that waits for `eno3`, brings it up, and applies its configuration.
- **Create the Service File Using vi:**

```bash
sudo vi /etc/systemd/system/thunderbolt-up.service
```

- **Insert the Following Content:**

```
[Unit]
Description=Force Thunderbolt Interface Up and Apply IP Configuration
After=systemd-udev-settle.service network-online.target
Wants=network-online.target

[Service]
Type=oneshot
ExecStartPre=/bin/sh -c 'while ! ip link show eno3 > /dev/null 2>&1; do sleep 1; done'
ExecStart=/sbin/ip link set eno3 up
ExecStartPost=/sbin/ifup eno3
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```
- **Save and Exit vi:**

Press `Esc`, type `:wq`, then press `Enter`.

- **Enable and Start the Service:**

```bash
sudo systemctl daemon-reload
sudo systemctl enable thunderbolt-up.service
sudo systemctl start thunderbolt-up.service
```

- **Verify Service Status:**

```bash
sudo systemctl status thunderbolt-up.service
```

The service should complete successfully, without a "Cannot find device" error.
- **Install iperf3 on Both Machines:**

```bash
sudo apt install iperf3
```

- **On the First Machine (Server) with IP 10.0.1.1:**

```bash
iperf3 -s
```

This starts iperf3 in server mode, listening on port 5201.

Example server output:

```
-----------------------------------------------------------
Server listening on 5201 (test #1)
-----------------------------------------------------------
Accepted connection from 10.0.1.2, port 39938
[  5] local 10.0.1.1 port 5201 connected to 10.0.1.2 port 39948
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  1.49 GBytes  12.8 Gbits/sec
[  5]   1.00-2.00   sec  1.60 GBytes  13.7 Gbits/sec
[  5]   2.00-3.00   sec  1.60 GBytes  13.8 Gbits/sec
[  5]   3.00-4.00   sec  1.60 GBytes  13.8 Gbits/sec
[  5]   4.00-5.00   sec  1.61 GBytes  13.8 Gbits/sec
[  5]   5.00-6.00   sec  1.60 GBytes  13.8 Gbits/sec
[  5]   6.00-7.00   sec  1.61 GBytes  13.8 Gbits/sec
[  5]   7.00-8.00   sec  1.61 GBytes  13.8 Gbits/sec
[  5]   8.00-9.00   sec  1.61 GBytes  13.8 Gbits/sec
[  5]   9.00-10.00  sec  1.60 GBytes  13.8 Gbits/sec
[  5]  10.00-10.00  sec  1.86 MBytes  12.9 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec  15.9 GBytes  13.7 Gbits/sec                  receiver
-----------------------------------------------------------
Server listening on 5201 (test #2)
-----------------------------------------------------------
```
- **On the Second Machine (Client) with IP 10.0.1.2:**

```bash
iperf3 -c 10.0.1.1
```

If you've configured DNS or have entries in your hosts file, you can also use the hostname:

```bash
iperf3 -c proxmox -p 5201
```

Example client output:

```
Connecting to host proxmox, port 5201
[  5] local 10.0.1.2 port 39948 connected to 10.0.1.1 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  1.49 GBytes  12.8 Gbits/sec  411   1.38 MBytes
[  5]   1.00-2.00   sec  1.60 GBytes  13.7 Gbits/sec    0   2.07 MBytes
[  5]   2.00-3.00   sec  1.60 GBytes  13.8 Gbits/sec    0   2.15 MBytes
[  5]   3.00-4.00   sec  1.60 GBytes  13.8 Gbits/sec    0   2.17 MBytes
[  5]   4.00-5.00   sec  1.61 GBytes  13.8 Gbits/sec    0   2.18 MBytes
[  5]   5.00-6.00   sec  1.60 GBytes  13.7 Gbits/sec    0   2.20 MBytes
[  5]   6.00-7.00   sec  1.61 GBytes  13.8 Gbits/sec    0   2.21 MBytes
[  5]   7.00-8.00   sec  1.61 GBytes  13.8 Gbits/sec    0   2.22 MBytes
[  5]   8.00-9.00   sec  1.61 GBytes  13.8 Gbits/sec    0   2.22 MBytes
[  5]   9.00-10.00  sec  1.60 GBytes  13.8 Gbits/sec    0   2.24 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  15.9 GBytes  13.7 Gbits/sec  411             sender
[  5]   0.00-10.00  sec  15.9 GBytes  13.7 Gbits/sec                  receiver

iperf Done.
```
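As a sanity check on the summary line: iperf3's "GBytes" are binary gigabytes (GiB), so 15.9 GBytes in 10 seconds reproduces the reported bitrate:

```shell
# 15.9 GiB transferred in 10 s, converted to decimal gigabits per second.
awk 'BEGIN { printf "%.1f Gbits/sec\n", 15.9 * 1073741824 * 8 / 10 / 1e9 }'
# prints "13.7 Gbits/sec"
```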
To evaluate how your Thunderbolt connection handles different workloads, it's useful to test with both large and small read/write block sizes (iperf3's `-l` option, often loosely called the packet size). This approximates the difference between transferring a few large files and many small files.
- **Testing with a Large Block Size (1 MB):**

```bash
iperf3 -c 10.0.1.1 -l 1M -t 60
```

This simulates large file transfers over a longer period (60 seconds).

Example output with a 1 MB block size:

```
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  1.60 GBytes  13.8 Gbits/sec    0   2.52 MBytes
[  5]   1.00-2.00   sec  1.61 GBytes  13.8 Gbits/sec    0   2.81 MBytes
...
[  5]  59.00-60.00  sec  1.60 GBytes  13.7 Gbits/sec    0   2.96 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  96.0 GBytes  13.7 Gbits/sec    0             sender
[  5]   0.00-60.00  sec  96.0 GBytes  13.7 Gbits/sec                  receiver
```
- **Testing with a Small Block Size (4 KB):**

```bash
iperf3 -c 10.0.1.1 -l 4K -t 30
```

This simulates transferring many small files.

Example output with a 4 KB block size:

```
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  1.59 GBytes  13.7 Gbits/sec    1   4.62 MBytes
[  5]   1.00-2.00   sec  1.53 GBytes  13.2 Gbits/sec  352   2.26 MBytes
...
[  5]  29.00-30.00  sec   918 MBytes  7.70 Gbits/sec    5   1.41 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-30.00  sec  40.8 GBytes  11.7 Gbits/sec  363             sender
[  5]   0.00-30.00  sec  40.8 GBytes  11.7 Gbits/sec                  receiver
```
- **Key Observations from Block Size Testing:**
  - **Large block performance (1 MB):**
    - Consistent 13.7-13.8 Gbits/sec throughput
    - Extremely stable performance with minimal fluctuation
    - Zero retransmissions over 60 seconds
    - Congestion window stabilizes at around 2.96 MBytes
  - **Small block performance (4 KB):**
    - Average 11.7 Gbits/sec throughput (approximately 15% lower)
    - Much more variable performance (fluctuating between 7.7 and 13.7 Gbits/sec)
    - 363 retransmissions
    - Less stable congestion window
These results show that Thunderbolt networking performs best with larger transfer sizes, as is typical for high-bandwidth, low-latency links. When moving many small files, expect somewhat lower and more variable throughput due to the increased per-operation overhead.
Bidirectional testing is important for evaluating the full capability of your Thunderbolt connection, as it simulates real-world scenarios where data flows in both directions simultaneously.

- **Run a Bidirectional Test:**

```bash
iperf3 -c 10.0.1.1 --bidir
```

This tests data transfer in both directions simultaneously, providing insight into how the connection performs under bidirectional load. (Note: in iperf3 the bidirectional flag is `--bidir`, available since version 3.7; `-d` enables debug output instead.)
- **Example Bidirectional Test Output:**

```
[  5]   9.00-10.00  sec  1.61 GBytes  13.8 Gbits/sec    0   2.72 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  16.1 GBytes  13.8 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  16.1 GBytes  13.8 Gbits/sec                  receiver
```
- **Understanding Advanced TCP Metrics:**

During iperf3 tests, detailed TCP statistics are collected and can provide insight into connection quality. Here's an example of the underlying metrics from a successful test:

```
sent 131072 bytes of 131072, pending 0, total 17236885504
tcpi_snd_cwnd 1970 tcpi_snd_mss 1448 tcpi_rtt 1090
send_results
{
  "cpu_util_total": 17.368918505871488,
  "cpu_util_user": 0.98325555975827161,
  "cpu_util_system": 16.38567294464443,
  "sender_has_retransmits": 1,
  "congestion_used": "cubic",
  "streams": [{
    "id": 1,
    "bytes": 17236885504,
    "retransmits": 0,
    "jitter": 0,
    "errors": 0,
    "packets": 0,
    "start_time": 0,
    "end_time": 10.00006
  }]
}
```
- **Key TCP Metrics Explained:**
  - **Congestion Window (tcpi_snd_cwnd):** 1970 segments can be in flight before an acknowledgment is required. Higher values generally mean better throughput.
  - **Maximum Segment Size (tcpi_snd_mss):** 1448 bytes is the maximum payload per packet. This affects efficiency: larger segments mean less per-packet overhead.
  - **Round-Trip Time (tcpi_rtt):** 1090 microseconds (just over 1 ms) indicates extremely low latency, characteristic of a direct Thunderbolt connection.
  - **Congestion Algorithm:** "cubic" is the default TCP congestion control algorithm in Linux.
  - **CPU Utilization:** ~17% total, with most of it system time rather than user time, indicates efficient kernel-level processing.
  - **Zero Retransmits:** Confirms excellent connection quality and stability.
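These three numbers fit together: the window-limited throughput a TCP connection can sustain is cwnd × MSS × 8 / RTT. Plugging in the values above (1970 segments, 1448 bytes, 1090 µs) gives roughly 21 Gbit/s, which is above the measured 13.7 Gbit/s, so the congestion window was not the bottleneck in this test:

```shell
# Window-limited throughput = cwnd (segments) * MSS (bytes) * 8 / RTT (s),
# using the tcpi_snd_cwnd, tcpi_snd_mss, and tcpi_rtt values shown above.
awk 'BEGIN { printf "%.1f Gbits/sec\n", 1970 * 1448 * 8 / 1090e-6 / 1e9 }'
# prints "20.9 Gbits/sec"
```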
- **What These Metrics Mean for Performance:**
  - **Low RTT (~1 ms):** Explains the high throughput despite a moderate congestion window. Thunderbolt's direct PCIe-based connection adds minimal latency.
  - **MSS/Buffer Efficiency:** The system efficiently handles segments close to the standard Ethernet payload size, minimizing fragmentation.
  - **CPU Usage Pattern:** The bias toward system time indicates that most processing happens in the kernel networking stack rather than in user space, which is expected for high-performance networking.
  - **Stability:** Zero retransmits during high-bandwidth transfers confirms the reliability of Thunderbolt networking.
Bidirectional testing with the `--bidir` flag is particularly valuable for evaluating Thunderbolt connections, as it exercises both directions simultaneously, which matters for workloads like live data synchronization or active-active clustering configurations.
Thunderbolt 3/4 advertises 40 Gbps of raw bandwidth, but practical networking throughput is limited by:

- **Per-direction limits:** Most implementations behave effectively half-duplex for networking, with a theoretical ceiling of roughly 20 Gbps in one direction.
- **Protocol overhead:** Ethernet framing and error checking reduce net throughput.
- **System constraints:** CPU, DMA, and driver efficiency further limit performance.

In our tests, ~13.7 Gbps is within the realistic range. Although this may seem well below the raw capacity, real-world factors (including the per-direction limit) usually cap effective throughput.
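For perspective, the measured throughput can be expressed against both reference points (the ~20 Gbit/s ceiling is the approximate figure discussed above, not an exact specification):

```shell
# Measured throughput as a share of the per-direction ceiling and of the
# advertised raw link rate.
awk 'BEGIN {
    measured = 13.7
    printf "vs ~20 Gbit/s ceiling: %.0f%%\n", measured / 20 * 100
    printf "vs 40 Gbit/s raw:      %.0f%%\n", measured / 40 * 100
}'
```

So the link delivers about two-thirds of its realistic per-direction ceiling, and about a third of the headline 40 Gbps figure.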
- **IP Configuration Not Applying Immediately:**
  - Check that `thunderbolt-up.service` is enabled and running: `sudo systemctl status thunderbolt-up.service`
  - Ensure the pre-start loop waits until the interface is created.
- **No New Network Interface:**
  - Verify that `thunderbolt_net` is loaded.
  - Confirm that the systemd `.link` file is correctly placed.
  - Ensure udev rules have been reloaded with: `sudo udevadm control --reload && sudo udevadm trigger --subsystem-match=net`
- **Interface Configuration Issues:**
  - Double-check the IP settings in `/etc/network/interfaces`.
Following these steps should enable Thunderbolt networking on your Linux system with a fixed IP (10.0.1.1) on the Thunderbolt interface (now named `eno3`). The systemd service (`thunderbolt-up.service`) addresses timing issues, ensuring the interface comes up and is configured automatically without requiring a full reboot. The iperf3 instructions let you measure throughput, and the results are put in context of Thunderbolt's per-direction limits and protocol overhead.