This is my documentation of what worked in my homelab for setting up a computer with two GPUs for gaming and Plex transcoding. Not all steps may be required!
- MSI Z590 Wifi Pro, BIOS version `7D09v18`
- Intel i5-11400 (6-core w/ Intel UHD 730 iGPU)
- Nvidia 3090 (Asus "ROG STRIX RTX3090 24G GAMING"), VBIOS version `94.02.42.00.A9`
- Others: 1TB NVMe drive, 32GB RAM, USB peripherals & monitor
- Proxmox 7.2-11 (host)
- Windows 11 (guest)
- LXC containers with iGPU mapped (guest)
- VM running Windows 11 where I can:
- Use the physical monitor (HDMI<->3090) and USB peripherals, passed through, for in-person gaming.
- Connect via Nvidia GameStream for remote gaming.
- Connect via RDP for use as a workstation.
- The VM itself is installed on the NVMe.
- LXC container with iGPU mapped to support Plex transcoding (running Plex in docker).
These are the relevant settings in my BIOS:
- Settings
  - Advanced
    - PCIe/PCI Sub-system settings
      - Max TOLUD: Dynamic
      - Re-size BAR support: Enabled
      - SR-IOV Support: Enabled
      - Native PCIE Enable: Enabled
      - Native ASPM: Auto
    - Integrated Graphics Configuration
      - Initiate Graphic Adapter: IGD
      - Integrated Graphics Share Memory: 32MB
    - BIOS CSM/UEFI mode: UEFI
  - Boot
    - Fast boot: Disabled
- OC
  - CPU features
    - Intel virtualization tech: Enabled
    - Intel VT-D Tech: Enabled
    - Control IOMMU Pre-boot behavior: Enable
    - DMA Control guarantee: Enabled
- Install Proxmox (version used is listed above).
- Run the post-install script at tteck.github.io/Proxmox.
- Unknown if required: update to the latest kernel via `apt install pve-kernel-5.19` and reboot.
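Before going further, it may be worth confirming that the VT-d settings from the BIOS actually took effect. A minimal check (the exact log strings vary by kernel version):

```
# Look for "DMAR: IOMMU enabled" and DMAR table parsing in the boot log.
dmesg | grep -e DMAR -e IOMMU
```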
The later steps in this doc should be checked and adapted to your setup (even if you have the same hardware) based on what you discover here.
Run `lspci -v` to list PCI devices. Note the following:
- The IDs of the Nvidia card's video and audio functions (e.g., `10de:2204` and `10de:1aef`) - these are used anywhere IDs are passed below.
- Which kernel modules they load (you'll want to blacklist them).
- Which IOMMU group they belong to. You'll want them to be in the same group, with no other devices in that group, after running some commands below. If they are already alone in the same group, do not add the `pcie_acs_override` flag to the GRUB options below. A faster way to check may be `ls -l /sys/kernel/iommu_groups/**/devices`, or the script below.
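If you want friendlier output than raw sysfs paths, here's a small sketch that prints each IOMMU group with full device names (it assumes `lspci` is available, which it is on a stock Proxmox install):

```
#!/bin/bash
# Print every IOMMU group and the devices it contains.
shopt -s nullglob
for group in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${group##*/}:"
    for dev in "$group"/devices/*; do
        # ${dev##*/} is the PCI address, e.g. 0000:01:00.0
        echo -e "\t$(lspci -nns "${dev##*/}")"
    done
done
```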
On the Proxmox host, open the GRUB config:

```
nano /etc/default/grub
```

and set:

```
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream initcall_blacklist=sysfb_init nofb video=vesafb:off,simplefb:off,efifb:off rd.driver.blacklist=snd_hda_intel,nvidia,nvidiafb,nouveau module.blacklist=snd_hda_intel,nvidia,nvidiafb,nouveau vfio-pci.ids=10de:2204,10de:1aef"
```

Save, then run `update-grub`.
On the Proxmox host, add the following to `/etc/modules`:

```
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
```
On the Proxmox host, add the following to `/etc/modprobe.d/blacklist.conf`:

```
blacklist nouveau
blacklist nvidia
blacklist nvidiafb
blacklist snd_hda_intel
```
Add the following to `/etc/modprobe.d/vfio.conf`:

```
options vfio-pci ids=10de:2204,10de:1aef disable_vga=1 disable_idle_d3=1
```

Note that `initcall_blacklist=sysfb_init` is a kernel command-line parameter rather than a vfio-pci option, so it belongs in the GRUB line above, not here. We can probably get away without `disable_idle_d3=1`?
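You can confirm modprobe picked up these options without rebooting; this just dumps the effective modprobe configuration and filters it:

```
# Should echo back the "options vfio-pci ids=..." line from vfio.conf.
modprobe -c | grep vfio
```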
Finally, run:

```
update-initramfs -u -k all
reboot
```

then log into Proxmox via SSH. Let's check a few things.
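Two quick sanity checks first (assuming the reboot picked up the new GRUB config and initramfs):

```
# The kernel command line should contain the intel_iommu/vfio-pci.ids flags we set.
cat /proc/cmdline
# The vfio modules from /etc/modules should now be loaded.
lsmod | grep vfio
```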
The `lspci -v` we ran earlier should show which kernel driver is in use for both the video and audio functions. It's important that both are bound to `vfio-pci`. If not, figure out how to get them bound (are you missing a blacklist entry?).
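For a focused view, you can filter `lspci` to just the card's slot; `01:00` is where my 3090 sits, so substitute the address from your own earlier `lspci -v` run:

```
# -k shows the driver in use; -nn adds the vendor:device IDs.
# Both functions should report "Kernel driver in use: vfio-pci".
lspci -nnk -s 01:00
```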
`cat /proc/iomem` is a good thing to check too. Look for your card's PCI address - if there are any entries nested under it, you need to figure out how to stop them from capturing the GPU, so that your VM can use it. E.g.:

```
6050000000-605fffffff : 0000:01:00.0
  6050000000-605fffffff : something-capturing-your-gpu-here
```
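Rather than scrolling the whole file, you can grep for the card's address (again, `01:00.0` is from my setup; adjust to yours):

```
# Indented lines under the match are whatever has claimed those ranges.
grep -A 2 "01:00.0" /proc/iomem
```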
Create the Windows 11 VM with these properties:
- Machine: q35
- BIOS: OVMF (UEFI)
- Display: Default
- SCSI controller: VirtIO SCSI Single
- Map whatever USB passthroughs you want
- Add PCI device with `All Functions`, `ROM-Bar`, and `PCI-Express` checked.
- Edit your VM conf (`/etc/pve/qemu-server/<vmid>.conf`) and append `,romfile=Asus.RTX3090.24576.210308_modded.rom` to the `hostpci` line (google how to modify the rom).
- Also run `qm set 100 -scsi1 /dev/disk/by-id/nvme-YOUR_ID_GOES_HERE` to pass the NVMe through (replace `100` with your VM ID; see the snippet after this list for finding your disk's ID).
- I like to set the boot order options to skip net boot. It's annoying.
- Don't forget to load the VirtIO drivers during Windows setup
- VirtIO install order: Balloon, NetKVM, vioscsi
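To find the stable by-id name for the `qm set` command above, list the by-id symlinks and pick your NVMe drive (a sketch; the exact naming depends on the drive model):

```
# Each symlink points at the kernel device (e.g. ../../nvme0n1);
# use the nvme-<model>_<serial> entry that matches your drive.
ls -l /dev/disk/by-id/ | grep nvme
```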
- After the install finishes, the first thing to do is install the rest of the drivers off the CD.
- Install guest agent.
- Run windows update
- Remove Windows Hello login requirement from your account if necessary
- Log out, log in using password
- Turn on remote desktop
- Test remote desktop
- Check Device Manager to make sure you've eliminated most unknown devices (there was one I couldn't get rid of from the VirtIO side)
- Install the Nvidia drivers from their website.
- Shut down, remove the default display, and mark the Nvidia PCI card as the primary GPU.
- Boot up, hopefully everything should work!
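One last check from inside the guest: recent Nvidia Windows drivers install `nvidia-smi`, so running it from a terminal in the VM should list the 3090 (assuming the driver install succeeded):

```
nvidia-smi
```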