A few people expressed interest in finding out how this went, so I thought I'd do a writeup of my experiences getting PCIe passthrough working with multiseat.
One of the more interesting things to note is that, hot-plugging aside, it works fine with Vega, despite the card not being shown as resettable by the Arch wiki script.
I've mostly followed the Arch wiki, with additional sources linked throughout.
Multiseat enables multiple people to use the same computer simultaneously. This can reduce setup costs (you only need one motherboard, CPU, etc.) and improve resource utilization (if one seat is idle, the other can make full use of the CPU and memory). The only parts needed per seat are a screen, peripherals, a graphics card, and (optionally) a USB sound card.
PCIe passthrough allows you to connect one of the seats to virtualized Windows. My motivation for this was to play games without preventing my wife from using her seat.
I wasn't able to get re-binding to work with Vega, which means that I need to edit `/etc/modprobe.d/vfio.conf`, regenerate my initramfs, and reboot each time I want to switch the seat between Linux and Windows. In practice, this isn't particularly onerous. (I could reduce the effort needed by adding a GRUB entry with different kernel args, but I haven't gotten around to it yet.)
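For reference, a minimal sketch of what that file can look like. The vendor:device IDs below are illustrative (they're the usual IDs for a Vega 64 and its HDMI audio function); check your own with `lspci -nn`:

```
# /etc/modprobe.d/vfio.conf
# Claim the GPU and its audio function for vfio-pci at boot.
options vfio-pci ids=1002:687f,1002:aaf8
```

After editing it, the initramfs needs regenerating before the change takes effect (e.g. `dracut --force`; the exact command depends on your distro).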
In the future (4.18 or later?) I wouldn't be surprised if this changes, given that 4.17 fixes some of the issues, and it already works perfectly for the RX 570.
Apart from that, it works pretty flawlessly. I haven't implemented any optimizations like CPU pinning, but the gaming performance is subjectively as good as native. (To be fair, I haven't been playing any CPU-intensive games, and the 1800X is overkill for almost every workload I've thrown at it.)
- Ryzen 7 1800X
- ASRock X370 Taichi
- RX Vega 64
- RX 570
- distro: Sabayon
- kernel: Linux 4.16
- display manager: lightdm
- guest OS: Windows 10
- QEMU doesn't support exposing a hyperthreaded AMD CPU yet, so I had to configure it with 8 cores, 1 thread each.
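  In libvirt domain XML, that configuration looks something like this (a sketch; the CPU mode shown is just what I'd reach for, adjust to taste):

  ```xml
  <!-- 8 vCPUs exposed as 8 cores with 1 thread each -->
  <vcpu placement='static'>8</vcpu>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='8' threads='1'/>
  </cpu>
  ```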
- I needed to add

  ```
  options vfio_iommu_type1 allow_unsafe_interrupts=1
  ```

  to modprobe.conf, otherwise I was getting this kernel error:

  ```
  vfio_iommu_type1_attach_group: No interrupt remapping support. Use the module param "allow_unsafe_interrupts" to enable VFIO IOMMU support on this platform
  ```

  This is explained here; tl;dr: it creates a vulnerability if you don't trust the guest OS. EDIT: This is only needed if you do not have `CONFIG_IRQ_REMAP` enabled in your kernel. See here for more info, or here.
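  To check whether your kernel has it (assuming the config is exposed; where it lives varies by distro):

  ```sh
  zgrep CONFIG_IRQ_REMAP /proc/config.gz          # if CONFIG_IKCONFIG_PROC is set
  grep CONFIG_IRQ_REMAP /boot/config-$(uname -r)  # common fallback location
  ```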
- If you get the TianoCore logo and it looks like it's frozen, but you're dropped into an EFI shell after a full minute, you probably need to fix the boot order.
- Migrating an existing Windows installation from the host to the guest was unsuccessful for me. No idea why, but I got a black screen the moment it loaded the drivers. A clean installation using the virtio drivers from the start worked flawlessly.
- Getting the network working was surprisingly painful; see this site for details. Note that using the pre-existing Docker bridge interface didn't work.
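  If you need to build the bridge by hand, here's a minimal sketch with `ip(8)` (the interface names `eth0` and `br0` are hypothetical; use your own):

  ```sh
  ip link add name br0 type bridge   # create the bridge
  ip link set eth0 master br0        # enslave the physical NIC
  ip link set br0 up                 # bring the bridge up
  ```

  The VM's NIC then points at `br0` in the domain XML.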
- If your VM isn't able to get a DHCP address, you might need to set the following sysctl:

  ```
  net.ipv4.ip_forward = 1
  ```
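  To apply it immediately and persist it across reboots (the filename here is arbitrary):

  ```sh
  sysctl -w net.ipv4.ip_forward=1
  echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-ip-forward.conf
  ```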
- Sound
  - Add the following lines to the `<domain>` element of your VM:

    ```xml
    <qemu:commandline>
      <qemu:env name='QEMU_AUDIO_DRV' value='pa'/>
      <qemu:env name='QEMU_PA_SERVER' value='/var/run/pulse/native'/>
      <!-- Without this, you get horrible crackling -->
      <qemu:env name='QEMU_PA_SAMPLES' value='8192'/>
      <qemu:env name='QEMU_AUDIO_TIMER_PERIOD' value='99'/>
    </qemu:commandline>
    ```
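    One thing to watch out for: libvirt won't accept `qemu:` elements unless the root `<domain>` tag declares the QEMU namespace, i.e.:

    ```xml
    <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
    ```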
  - If you're running Pulseaudio in system mode (common for cooperative multiseat), you'll probably have this in `/etc/pulse/system.pa`:

    ```
    load-module module-native-protocol-unix auth-group=pulse-access auth-group-enable=1
    ```

    Since the VM runs as the `qemu` user by default, you'll need to add it to the `pulse-access` group to get audio working.
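    For example (the `qemu` user name may differ depending on how libvirt was packaged):

    ```sh
    usermod -aG pulse-access qemu
    ```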
  - Audio is very choppy out of the box; you need to set Windows to use the same sample rate as Pulseaudio, as documented here. Even after this, I found the audio choppy until another seat had started.
- If only one CPU core is detected, you might need to edit your VM so that it actually starts with all the cores online (in my case, both of the relevant values were 4); see the sketch below.
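  A hedged sketch of that edit (I'm assuming the relevant element is libvirt's `<vcpu>`, whose `current` attribute controls how many vCPUs are online at boot):

  ```xml
  <!-- both values equal: all 4 vCPUs online from the start -->
  <vcpu placement='static' current='4'>4</vcpu>
  ```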
The idea here was to be able to switch a seat between Windows and Linux without rebooting the host. This works flawlessly for the RX 570, but not for the RX Vega 64, which produces kernel errors from null pointer dereferences. (This is improved, but not fixed, in 4.17-rc2, where unbinding works but rebinding is still broken.)
The Gentoo wiki and this blog have good info on how to do this dynamically. Note that any args set for the `vfio-pci` module at boot are just the defaults, and can be removed using the `unbind` files in `/sys`.
The general approach is:
- remove userspace consumers of the graphics card with `loginctl terminate-seat seat1`
- unbind the card from `amdgpu` and rebind it to `vfio-pci`
- start the VM
To move back to Linux, shut down the VM, unbind the card from `vfio-pci`, and re-scan to re-bind it to `amdgpu`. Logind will automatically recreate the seat, login screen and all.
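A rough sketch of both directions, assuming the hypothetical PCI address `0000:0a:00.0` for the card (find yours with `lspci`); everything here runs as root:

```sh
# Linux -> Windows: free the card, then hand it to vfio-pci
loginctl terminate-seat seat1
echo 0000:0a:00.0 > /sys/bus/pci/drivers/amdgpu/unbind
echo vfio-pci > /sys/bus/pci/devices/0000:0a:00.0/driver_override
echo 0000:0a:00.0 > /sys/bus/pci/drivers/vfio-pci/bind
# ...now start the VM...

# Windows -> Linux: after shutting the VM down
echo 0000:0a:00.0 > /sys/bus/pci/devices/0000:0a:00.0/remove
echo 1 > /sys/bus/pci/rescan   # rediscovers the card; amdgpu rebinds
```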
- If you terminate seat0, LightDM will shut down all the seats. This means that seat0 must always run Linux.
- If there are any consumers of the graphics card when you attempt to rebind it, you'll get an error in the kernel log with a stack trace.
- Kernel logs going to the framebuffer count as a consumer, so you either need to disable them with `video=efifb:off`, or ensure they go to `seat0`.
- Despite running Linux 4.16, I suffered from the PCI reinit bug: I couldn't get the VM to start a second time without putting the host to sleep for a second. (Sometimes it woke up by itself, sometimes I had to push the power button to wake it.)
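If you'd rather script that suspend cycle than do it by hand, `rtcwake` can set a wake alarm (assuming your platform resumes reliably from it, which mine evidently didn't always):

```sh
rtcwake -m mem -s 3   # suspend to RAM, wake after 3 seconds
```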