Here's how to set up a Windows 10 virtual machine in KVM with PCI passthrough. The VM will have access to an NVIDIA graphics card while the host machine (running Debian Buster) uses Intel integrated graphics. This is mostly for my own reference so I don't forget how I did it.
- Intel i5 (an old one) with integrated graphics: this will be used as the graphics card for the host machine running Debian Buster
- NVIDIA GeForce GTX 1070: this will be used as the graphics card for the Windows 10 VM
In order to do hardware passthrough with KVM at all, you need to enable the Intel VT-d virtualization extensions. Edit /etc/default/grub and change the GRUB_CMDLINE_LINUX_DEFAULT line so that it reads like:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
The intel_iommu=on option enables the kernel's Intel IOMMU (VT-d) support, which KVM needs for device assignment. iommu=pt puts the IOMMU into passthrough mode, so DMA remapping is only applied to devices that are actually handed to a VM. Then run
sudo grub-mkconfig -o /boot/grub/grub.cfg
to rebuild your GRUB config.
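After the reboot mentioned below (the GRUB change doesn't take effect until then), you can double-check that the IOMMU actually came up and see how your devices are grouped; ideally the GPU and its HDMI audio function land in an IOMMU group of their own. A quick sketch, assuming the usual sysfs layout:
# Confirm that DMA remapping / IOMMU initialization shows up in the kernel log
dmesg | grep -i -e DMAR -e IOMMU
# List every IOMMU group and the PCI devices it contains
for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    echo "  $(lspci -nns "${d##*/}")"
  done
done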
Most types of devices can be used by the host and then passed through to the VM on demand once you actually start it. Graphics cards can't do that. If the host loads the driver for your card and starts talking to it then you can't pass it through to the VM. Linux will load the driver for any card that's plugged in even if it's not your default graphics card, so to get around that we need to tell the host's Linux kernel that we intend to use the NVIDIA card for a virtual machine, and we need to do that before the kernel gets the chance to load the driver for it (which in this case is the Nouveau open source driver).
There are a few ways to do that. You could tell the kernel to outright block the Nouveau module completely. I ended up instead telling it to wait to load the Nouveau module until after the card had already been initialized for use by VFIO passthrough. This will stop Nouveau from trying to do anything with it. Since the NVIDIA card also uses the Intel HDA module for audio output over HDMI we'll do the same thing with Intel HDA.
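For reference, the blunter blacklist approach mentioned above would be a modprobe config along these lines (not what's used below; the file name is arbitrary):
# /etc/modprobe.d/blacklist-nouveau.conf
blacklist nouveau
options nouveau modeset=0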
To do that, find the PCI IDs of your GPU and its audio device using:
lspci -vnn
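The -vnn listing is long; a more compact view that still shows the [vendor:device] IDs and the kernel driver currently bound is lspci -nnk, optionally filtered (the grep pattern is just an example):
lspci -nnk | grep -E -i -A 3 'vga|nvidia'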
In my case the ID of the GPU is 10de:1b81 and the HDMI sound output is 10de:10f0. Note the part beneath the GPU where it says Kernel driver in use: nouveau. If everything works correctly, that should change by the time we're done. To flag the card for use by VFIO, create the file /etc/modprobe.d/vfio.conf with the contents:
softdep nouveau pre: vfio-pci
softdep snd_hda_intel pre: vfio-pci
options vfio-pci ids=10de:1b81,10de:10f0
Then run
sudo update-initramfs -u
to update the boot filesystem image (the initramfs) with that config. You'll need to reboot at this point. To check that everything worked, run lspci -vnn again and find the GPU. Beneath both of the NVIDIA devices we flagged you should see Kernel driver in use: vfio-pci.
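You can also query the two devices directly by their IDs instead of scanning the whole listing:
# -d filters by vendor:device ID; -k shows which driver is currently bound
lspci -nnk -d 10de:1b81
lspci -nnk -d 10de:10f0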
I recommend using virt-manager and setting up a regular Windows 10 VM with the default QXL video card before trying to do any passthrough stuff. When creating the VM, make sure to select "Customize before install" and set the Firmware option to "UEFI". Create the VM and go through the Windows installer until you have a working Windows 10 installation with no GPU passthrough, then shut down the VM.
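If you prefer doing the initial install from the command line, a roughly equivalent virt-install invocation might look like this (the name, sizes and ISO path are placeholders, and the exact flags depend on your virt-install version):
virt-install \
  --name win10 \
  --memory 8192 \
  --vcpus 4 \
  --os-variant win10 \
  --cdrom /path/to/Win10.iso \
  --disk size=80 \
  --graphics spice \
  --video qxl \
  --boot uefi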
In virt-manager, go to your VM settings and click "Add Hardware", then "PCI Host Device". This will give you a list of all your PCI devices; select the NVIDIA GPU and click "Finish" to add it. Repeat the process for the NVIDIA Audio Controller.
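Under the hood this just adds a <hostdev> entry to the domain XML. If you'd rather add it by hand with virsh edit, it looks something like this, where bus/slot/function come from the device's PCI address in lspci (01:00.0 here is just an example; the audio function is typically the same address with function='0x1'):
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>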
You can attempt to launch the VM at this point, but if you do, Windows will install the NVIDIA driver but the card still won't work. If you go into Device Manager in Windows, you'll see the NVIDIA card with a little yellow caution icon and opening the device properties will reveal an enigmatic "Code 43" error.
This error seems to happen because the NVIDIA driver realizes that it's running inside a VM and disables itself. Since we don't want that, we need to "hide" the fact that there's a VM from the driver. KVM has a mechanism for doing that, but it's not exposed in virt-manager, so we'll need to edit the XML config for the virtual machine manually. To do that, run:
sudo virsh edit win10
where win10 is the name of the VM that you gave when you created it inside virt-manager. You'll need to edit the contents of the <features> tag in the following way:
Inside the <hyperv> tag, add the line:
<vendor_id state='on' value='1234567890ab'/>
(the actual value of vendor_id is arbitrary, but it must be a string of no more than 12 characters).
Inside the <kvm> tag (create it if it isn't already there), add the line:
<hidden state='on'/>
Directly inside the <features> tag itself, add the line:
<ioapic driver='kvm'/>
The end result should look something like:
<features>
  <acpi/>
  <apic/>
  <hyperv>
    <relaxed state='on'/>
    <vapic state='on'/>
    <spinlocks state='on' retries='8191'/>
    <vendor_id state='on' value='1234567890ab'/>
  </hyperv>
  <kvm>
    <hidden state='on'/>
  </kvm>
  <vmport state='off'/>
  <ioapic driver='kvm'/>
</features>
If you boot the machine up again, the NVIDIA driver should actually work! Windows will probably default to using the GPU as the primary card, which means the Windows login prompt will likely appear on the display connected to the video card rather than on the QXL display that you can see in virt-manager.
- PCI passthrough on the Arch Linux wiki
- VGA passthrough on the Debian wiki
- Heiko Sieger: running Windows 10 on Linux using KVM with VGA passthrough