
@OlfillasOdikno
Last active August 7, 2025 18:27
Hyper-V Linux Guest GPU-PV


  • Create VM
$isopath = <iso location>
$vhdpath = <vhdx location>
$vmpath = <vm path>
$vmname = "Arch-dxgkrnl"
New-VM -Name $vmname -MemoryStartupBytes 8GB -BootDevice VHD -NewVHDPath $vhdpath -Path $vmpath -NewVHDSizeBytes 20GB -Generation 2 -Switch "Default Switch"
Set-VM -Name  $vmname -CheckpointType Disabled
Set-VMMemory $vmname -DynamicMemoryEnabled $false
Add-VMDvdDrive -VMName  $vmname -Path $isopath
Set-VMFirmware -VMName $vmname -EnableSecureBoot Off -FirstBootDevice (Get-VMDvdDrive -VMName $vmname)[0]
Add-VMGpuPartitionAdapter -VMName $vmname
Set-VMGpuPartitionAdapter -VMName $vmname -MinPartitionVRAM 1
Set-VMGpuPartitionAdapter -VMName $vmname -MaxPartitionVRAM 11
Set-VMGpuPartitionAdapter -VMName $vmname -OptimalPartitionVRAM 10
Set-VMGpuPartitionAdapter -VMName $vmname -MinPartitionEncode 1
Set-VMGpuPartitionAdapter -VMName $vmname -MaxPartitionEncode 11
Set-VMGpuPartitionAdapter -VMName $vmname -OptimalPartitionEncode 10
Set-VMGpuPartitionAdapter -VMName $vmname -MinPartitionDecode 1
Set-VMGpuPartitionAdapter -VMName $vmname -MaxPartitionDecode 11
Set-VMGpuPartitionAdapter -VMName $vmname -OptimalPartitionDecode 10
Set-VMGpuPartitionAdapter -VMName $vmname -MinPartitionCompute 1
Set-VMGpuPartitionAdapter -VMName $vmname -MaxPartitionCompute 11
Set-VMGpuPartitionAdapter -VMName $vmname -OptimalPartitionCompute 10
Set-VM -GuestControlledCacheTypes $true -VMName $vmname
Set-VM -LowMemoryMappedIoSpace 1GB -VMName $vmname
Set-VM -HighMemoryMappedIoSpace 32GB -VMName $vmname
Start-VM -Name $vmname
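To sanity-check the result, the standard Hyper-V cmdlets can list the host's partitionable GPUs and the adapter that was just attached (a quick sketch; run from an elevated PowerShell, the output fields vary by driver):

# list GPUs the host is able to partition
Get-VMHostPartitionableGpu

# show the partition adapter attached to the VM and its VRAM/encode/decode/compute settings
Get-VMGpuPartitionAdapter -VMName $vmname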
  • install Arch with Sway the normal way

  • install dependencies

sudo pacman -S git base-devel

git clone https://aur.archlinux.org/yay.git && cd yay && makepkg -si
  • start sshd
sudo systemctl start sshd
  • for nvidia:
mkdir drivers
WSLAttachSwitch.exe "Default Switch"

Now, inside WSL:

sudo dhclient eth1
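The scp/ssh commands below assume that arch-dxgkrnl resolves to the VM from inside WSL. If it does not, an entry in ~/.ssh/config is one way to provide it (a sketch; the IP and user are placeholders, use the address the VM got on the Default Switch):

# ~/.ssh/config inside WSL (hypothetical example)
Host arch-dxgkrnl
    HostName 172.20.0.2   # placeholder: the VM's IP on the Default Switch
    User arch             # placeholder: your guest user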
  • copy proprietary files
scp -r /usr/lib/wsl/lib arch-dxgkrnl:~/
  • for nvidia:
ssh arch-dxgkrnl sudo -S mkdir -p $(echo /usr/lib/wsl/drivers/nvle.inf_amd64_*/)
scp -r /usr/lib/wsl/drivers/nvle.inf_amd64_*/*.so* arch-dxgkrnl:~/drivers
ssh arch-dxgkrnl

sudo mv lib/* /usr/lib
sudo ln -s /lib/libd3d12core.so /lib/libD3D12Core.so
  • for nvidia:
sudo cp -r drivers/* /usr/lib/wsl/drivers/nvle.inf_amd64_*
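To double-check that the copied userspace libraries ended up where the following steps expect them, a rough check like this can help (exact file names depend on the driver version):

ls /usr/lib | grep -iE 'cuda|dxcore|d3d12'
ls /usr/lib/wsl/drivers/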
  • install dxg dkms
cd ~
git clone https://github.com/OlfillasOdikno/dxgkrnl-dkms.git && cd dxgkrnl-dkms && makepkg -si

sudo modprobe dxgkrnl

/dev/dxg should now exist
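A quick way to verify that the module loaded and the device node appeared (exact dmesg lines differ between kernel and driver versions):

lsmod | grep dxgkrnl
ls -l /dev/dxg
sudo dmesg | grep -i dxg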

  • install mesa-d3d12
yay -S mesa-d3d12 mesa-utils xorg-xwayland
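Before starting sway, you can confirm that the D3D12 Gallium driver actually got installed (the DRI path below is the usual location on Arch; it may differ if the package installs elsewhere):

ls /usr/lib/dri | grep d3d12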

Now, in the Hyper-V console of the VM:

export MESA_LOADER_DRIVER_OVERRIDE=d3d12
export WLR_RENDERER_ALLOW_SOFTWARE=1
LIBGL_ALWAYS_SOFTWARE=1 sway
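These exports only apply to the current shell; to keep the Mesa override across logins it can go into a shell profile (a minimal sketch, assuming a bash-style login shell; LIBGL_ALWAYS_SOFTWARE is deliberately left out since it is only set while launching sway itself):

# ~/.bash_profile (assumption: the login shell reads this file)
export MESA_LOADER_DRIVER_OVERRIDE=d3d12
export WLR_RENDERER_ALLOW_SOFTWARE=1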

Now execute glxinfo in sway:

LIBGL_ALWAYS_SOFTWARE=0 glxinfo -B

It should show your graphics card:

name of display: :0
NVD3D10: CPU cyclestats are disabled on client virtualization
NVD3D10: CPU cyclestats are disabled on client virtualization
display: :0  screen: 0
direct rendering: Yes
Extended renderer info (GLX_MESA_query_renderer):
    Vendor: Microsoft Corporation (0xffffffff)
    Device: D3D12 (NVIDIA GeForce RTX 3060) (0xffffffff)       <----- here
    Version: 21.2.5
    Accelerated: yes
    Video memory: 28458MB
    Unified memory: no
    Preferred profile: core (0x1)
    Max core profile version: 3.3
    Max compat profile version: 3.1
    Max GLES1 profile version: 1.1
    Max GLES[23] profile version: 3.0
OpenGL vendor string: Microsoft Corporation
OpenGL renderer string: D3D12 (NVIDIA GeForce RTX 3060)
OpenGL core profile version string: 3.3 (Core Profile) Mesa 21.2.5
OpenGL core profile shading language version string: 3.30
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile

OpenGL version string: 3.1 Mesa 21.2.5
OpenGL shading language version string: 1.40
OpenGL context flags: (none)

OpenGL ES profile version string: OpenGL ES 3.0 Mesa 21.2.5
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.00
@eric-gitta-moore

@Nislaco awesome work!
I would like to know whether CUDA and Tensor Cores can operate normally.

@Nislaco

Nislaco commented Aug 7, 2025

As far as I know it should work correctly; I have run AI programs like Ollama and Stable Diffusion/Riffusion, and they work without errors or issues.

The containers provided by Nvidia should also work; however, I have not tested them on my end.

I tend to prefer using SSH forwarding and accessing the apps' web GUIs from another machine.

If you have multiple nvidia cards in your host, you might need to pass all of them through in order for nvidia-smi to function correctly.
