@Ayke
Last active April 20, 2025 21:30
Everything About CUDA in WSL2 Ubuntu

Prerequisites, i.e. the most important things

  1. Time of writing: Jan 18, 2023, updated on Sep 22, 2024. The following assumes that you're trying to install CUDA on WSL2 Ubuntu.

  2. Check the support matrix before you install any version of CUDA, because chances are the latest CUDA does not have cuDNN support yet, and you would have to re-install an older version if you only found out later.

    https://docs.nvidia.com/deeplearning/cudnn/support-matrix/index.html#cudnn-cuda-hardware-versions

    At the time of writing, the latest cuDNN version is 8.7 and it supports CUDA 11.8.

  3. Windows 10 must be build 20145 or later, or you should be on Windows 11.

    At the time of writing (Sep 2024), the latest Windows 10 (22H2) release build is 19045.xxxx. Build 20145 or later does not seem to be included in any official release. To the best of the writer's knowledge, if your current Windows 10 build is older than 20145, there is no official way to upgrade to it, even if you have enrolled in the Insider Program.

    Your only shot will be to download unofficial images from uupdump or adguard and then install them. But I do not recommend it, because Windows 10 will eventually flag your image as expired and you'll receive endless annoying alerts.

    The bottom line is, if your build version is older than 20145, you'd better just upgrade to Windows 11. You'll probably need to enable TPM 2.0 from BIOS for that.

  4. WSL must be WSL2

    At the time of writing (Jan 2023), CUDA does not support WSL1, and I don't know if there's any chance it ever will. Even the CUDA support for WSL2 is somewhat limited, so the chance that WSL1 can work with CUDA is slim.

  5. WSL2 Ubuntu Kernel version must be at least 4.19.121

    Run wsl cat /proc/version (from Windows) or cat /proc/version (inside Ubuntu) to check your WSL Ubuntu's kernel version. It must be at least 4.19.121, and preferably 5.10.16.3 or later.

    If your WSL Ubuntu kernel version is too old:

    sudo apt update && sudo apt upgrade
    sudo do-release-upgrade

    will upgrade your Ubuntu packages and release. Note, however, that the WSL2 kernel itself is shipped and updated from the Windows side, so if cat /proc/version still reports an old kernel, update WSL from Windows (see the check/update sketch right after this list).
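
A minimal check/update sketch, run from a Windows PowerShell or CMD prompt (the distro name Ubuntu below is an assumption; substitute whatever wsl -l -v reports for you):

wsl -l -v                     # confirm the distro is running under WSL version 2
wsl --set-version Ubuntu 2    # convert a WSL1 distro to WSL2 if needed
wsl --update                  # update the WSL2 kernel that Windows ships
wsl cat /proc/version         # re-check the kernel version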

Overall procedure

The key point of installing CUDA in WSL2 is that you MUST match three things together:

  • cuDNN version
  • Ubuntu CUDA runtime version
  • Windows CUDA library version

The fundamental relation is: the Nvidia driver contains the actual CUDA capability (above we call it the CUDA library, for lack of a better word; in our scenario it lives in the Windows Nvidia driver), and the runtime tells applications how to use it (in our scenario it lives in Ubuntu). cuDNN is an add-on to the runtime that gives applications an easy way to run deep neural networks on CUDA.

In short, you will install a Windows Nvidia driver, a Linux CUDA runtime (not the Linux driver; you install only one driver, on Windows), and then cuDNN on top of the Linux CUDA runtime.

Determine which version of driver to install

First, look up the latest version of cuDNN; it determines the versions of everything else (graphics driver, CUDA runtime, etc.).

For example, if you open up https://docs.nvidia.com/deeplearning/cudnn/latest/reference/support-matrix.html#cudnn-cuda-hardware-versions and find that the latest cuDNN is 9.4.0 and it supports CUDA Toolkit 11.x and 12.x, those correspond to NVIDIA Driver Version for Windows >= 452.39 and >= 527.41 respectively.

Therefore you will download the cuDNN library (tarball) and a Windows Nvidia driver whose version satisfies the above requirement (depending on whether you'll use CUDA 11 or 12; say we go with CUDA 12, then I'll download a driver whose version is greater than or equal to 527.41).

Sometimes cuDNN does not support the CUDA version in the latest Windows Nvidia driver (say it may not support the latest 12.7); in this case you will need to install a Windows Nvidia driver that ships an earlier version of CUDA (say 12.6).

However, since the Windows Nvidia graphics driver does not explicitly tell you which version of CUDA it includes, in this case you'll need to guess its CUDA version from the CUDA release dates (https://developer.nvidia.com/cuda-toolkit-archive). Say 12.6.1 was released in August; then your July-released Windows driver is likely 12.5.1.

After installing the driver, later steps will reveal its CUDA version, so if you have trouble making sure the Windows driver's CUDA version is correct, it's only a matter of a couple more installations.

Overall Steps

  1. Pick the latest cuDNN, then look up the range of CUDA versions it supports.
  2. Check the release dates of those CUDA versions.
  3. Search for and install a graphics driver whose CUDA version is supposed to be supported by cuDNN.
  4. After installing the graphics driver, in your Ubuntu bash run nvidia-smi (under /usr/lib/wsl/lib) and check its CUDA version.
  5. Install the CUDA runtime of that version based on https://docs.nvidia.com/cuda/wsl-user-guide/index.html#cuda-support-for-wsl-2
  6. Install cuDNN in WSL2 Ubuntu (again, tar installation recommended).

Of the usual debug/status-check commands, nvidia-smi belongs to the Windows driver, while nvcc belongs to the Linux CUDA runtime.
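
As a quick sanity sketch of that split (assuming both sides are already installed), compare the two reports from Ubuntu bash:

nvidia-smi        # comes from the Windows driver; shows the CUDA library version
nvcc --version    # comes from the Linux CUDA runtime; shows the runtime version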

Install Nvidia Windows Driver

I'll skip the part where you install the Windows graphics driver. If you need to find your GPU's model, open up the Nvidia settings and you'll see it.

After you've installed the Windows driver, and after WSL2 has been installed:

Check if Windows Graphic Driver is installed right

Go to C:\Windows\System32\lxss\lib and check that the folder contains files. It should contain a lot of .so files (library files).

Also, in Ubuntu bash, which nvidia-smi should point to /usr/lib/wsl/lib/nvidia-smi, and it should already work before you install anything in WSL. Whatever nvidia-smi shows is the actual CUDA library version on your Windows side.

In other words, after the Nvidia Windows graphics driver is installed and WSL2 is set up, C:\Windows\System32\lxss\lib should have been mounted into the Linux subsystem as the folder /usr/lib/wsl/lib. If it doesn't work, it's either the Nvidia driver's fault or the WSL2 installation's fault.
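
A minimal verification sketch from Ubuntu bash (nothing needs to be installed on the WSL side yet):

which nvidia-smi       # should print /usr/lib/wsl/lib/nvidia-smi
ls /usr/lib/wsl/lib    # should list libcuda.so and other driver .so files
nvidia-smi             # should show your GPU and the driver's CUDA version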

If it works, then you may proceed to install the CUDA runtime; please install the exact same version of runtime as nvidia-smi has shown.

Install Linux CUDA Runtime

You'd better go with Option 1 and download the CUDA toolkit directly from https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&Distribution=WSL-Ubuntu&target_version=2.0&target_type=deb_local, telling the download selector that you're installing the Linux version for WSL-Ubuntu.

If you're going with Option 2 and installing directly from the Linux package manager, note that you should not install the cuda, cuda-12-x, or cuda-drivers packages under WSL 2, as these packages will override the Nvidia driver you have installed. You should install the cuda-toolkit-XX-X metapackage only.

sudo apt list cuda-toolkit --all-versions

To see all available cuda-toolkit versions.

sudo apt-get install <package-name>=11.7.1-1

To install a specific version of the package.
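
For example, a hedged sketch with illustrative package and version names (pick the metapackage matching the CUDA version that nvidia-smi reports):

sudo apt list cuda-toolkit-11-7 --all-versions     # list the versions of one metapackage
sudo apt-get install cuda-toolkit-11-7=11.7.1-1    # install that specific version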

Reference: https://docs.nvidia.com/cuda/wsl-user-guide/index.html

Check if CUDA runtime is installed right

/usr/local/cuda/bin/nvcc --version should work; nvcc will tell you the CUDA runtime version, which should match the CUDA library version shown by nvidia-smi.

Make sure /usr/local/cuda is pointing to the right CUDA runtime (the one whose version matches the Windows Nvidia driver you just installed, not some CUDA runtime you may have installed before).
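
A short sketch of checking and, if necessary, repointing the symlink (the 12.4 path below is just an example; use whichever runtime version you actually installed):

ls -l /usr/local/cuda                                # see which cuda-X.Y it points to
sudo ln -sfn /usr/local/cuda-12.4 /usr/local/cuda    # repoint it if it's wrong
/usr/local/cuda/bin/nvcc --version                   # confirm the runtime version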

Install cuDNN

cuDNN is installed on top of your Linux CUDA runtime, so you will download the Linux version of cuDNN and install it in WSL Ubuntu.

https://developer.nvidia.com/rdp/cudnn-download

https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html#install-linux

The tar install is the most reliable and easiest way to install cuDNN; I personally do not recommend the deb installation.

To tar/tarball install, you just download the package, extract it, and copy the contents into the corresponding CUDA folders.

Whatever is in include, copy it to /usr/local/cuda/include; whatever is in lib (or lib64), copy it to /usr/local/cuda/lib64. Don't forget to chown and chmod if needed.
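
For instance, a minimal sketch of the tarball copy, assuming a cuDNN 9.x archive built for CUDA 12 (the exact archive name will differ for your download):

tar -xvf cudnn-linux-x86_64-9.x.y_cuda12-archive.tar.xz
cd cudnn-linux-x86_64-9.x.y_cuda12-archive
sudo cp include/cudnn*.h /usr/local/cuda/include/
sudo cp -P lib/libcudnn* /usr/local/cuda/lib64/
sudo chmod a+r /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib64/libcudnn*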

(See more in https://docs.nvidia.com/deeplearning/cudnn/latest/installation/linux.html#tarball-installation, but it seems there's not much info there.)

If you're installing cuDNN from deb, here are several useful commands (the reason I don't recommend it is that I've tried it):

To check all cuDNN versions in package manager:

sudo apt list libcudnn8 --all-versions

If you can't find FreeImage.h, try

sudo apt-get install libfreeimage3 libfreeimage-dev

To remove a repository

sudo dpkg -P <the package you installed from the deb>

And you may check /etc/apt/sources.list.d/ if your sudo apt update keeps failing or getting blocked.

Check if cuDNN is installed right

Download cudnn_samples for your cudnn version from https://developer.download.nvidia.com/compute/cudnn/redist/

It may be under the folder cudnn_samples/source instead of cudnn_samples/linux-x86_64.

Enter one of the examples, say src/cudnn_samples_v9/mnistCUDNN, run make, and then ./mnistCUDNN.
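
A minimal sketch of that samples test (the cudnn_samples_v9 folder name assumes a cuDNN 9.x download; adjust it to whatever you extracted):

cd cudnn_samples_v9/mnistCUDNN
make clean && make
./mnistCUDNN    # should finish with a test-passed message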

Path

You still need to set up $PATH so that the CUDA runtime may work correctly.

You should also make sure your path has included /usr/lib/wsl/lib.

If it's not in your path, probably your kernel version is not high enough. I do not recommend manually adding /usr/lib/wsl/lib to your LD_LIBRARY_PATH, because it should be managed automatically via /etc/ld.so.conf.d/ld.wsl.conf, which is generated by WSL. If you don't have that file, it's probably a sign that your WSL needs an update (or a re-install after uninstalling).
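
A quick sanity check that WSL is managing the library path for you (the expected output below is an assumption based on my setup):

cat /etc/ld.so.conf.d/ld.wsl.conf    # should contain /usr/lib/wsl/lib
ldconfig -p | grep libcuda           # libcuda.so should resolve from /usr/lib/wsl/lib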

In ~/.bashrc, add the following paths for CUDA:

if [ -d $HOME/.local/bin ]; then
        export PATH=$HOME/.local/bin:$PATH
fi

export CUDA_HOME=/usr/local/cuda

if [ -d $CUDA_HOME/bin ]; then
        export PATH=$CUDA_HOME/bin:$PATH
fi

if [ -d $CUDA_HOME/lib64 ]; then
        export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
fi

then restart the terminal or source ~/.bashrc. The first path is actually for pip, but I included it anyway for my personal reference XD.

Debug clues

Use pytorch's collect_env.py to verify your environment.
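
For example, a couple of hedged one-liners (assuming pytorch is installed in the active environment):

python -m torch.utils.collect_env                             # pytorch's environment report
python -c "import torch; print(torch.cuda.is_available())"    # should print True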

/usr/lib/wsl/lib is actually C:\Windows\System32\lxss\lib; the .so files there got installed when you installed the Nvidia graphics driver in Windows.

Make sure your links such as /usr/local/cuda are pointing to the right position.

sudo apt-get install libfreeimage3 libfreeimage-dev will give you FreeImage.h header.

To test the overall installation, a CUDA sample run is still needed. I did not include it in this doc because I started using pytorch etc., which also proves that the installation works.

So, to test it with the CUDA samples:

https://github.com/nvidia/cuda-samples
https://docs.nvidia.com/cuda/cuda-installation-guide-linux/#mandatory-actions
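
A hedged sketch of a deviceQuery run (the Samples/1_Utilities path matches recent cuda-samples layouts; older releases are laid out differently, and the newest ones build with cmake instead of per-sample Makefiles):

git clone https://github.com/NVIDIA/cuda-samples.git
cd cuda-samples/Samples/1_Utilities/deviceQuery
make
./deviceQuery    # should end with Result = PASS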

@docmarionum1

This was very helpful.

You have a typo in your "Path" section. My LD_LIBRARY_PATH wasn't updating.

if [ -d $CUDA_HOME/lib64n ]; then should be if [ -d $CUDA_HOME/lib64 ]; then

@Ayke (Author) commented Oct 27, 2023

This was very helpful.

You have a typo in your "Path" section. My LD_LIBRARY_PATH wasn't updating.

if [ -d $CUDA_HOME/lib64n ]; then should be if [ -d $CUDA_HOME/lib64 ]; then

I'm glad it helped you! I've fixed the typo, thanks for letting me know!!

@arcolombo

How does this work with Python virtual environments, by changing the paths? I am interested in GPU support in a virtual env.

@inclinedadarsh

But you need to make sure /usr/local/cuda is pointing to the right cuda runtime.

Hey can you please elaborate on this one?

@Ayke (Author) commented Jul 23, 2024

But you need to make sure /usr/local/cuda is pointing to the right cuda runtime.

Hey can you please elaborate on this one?

Hello there! I've updated it to make it clearer

@sudarshan227 commented Sep 30, 2024

Really helpful guide but there is one very key point that is not correct. The CUDA version in WSL does not have to match the CUDA version in the NVIDIA windows driver for GPU to work correctly in WSL. For example, to use tensorflow, you have to use the supported CUDA and cuDNN versions of 12.3 and 8.9 respectively to successfully utilize the GPU. I was able to get tensorflow 2.17 to work in WSL with CUDA 12.3 and cuDNN 8.9 despite having the latest game ready NVIDIA driver with CUDA 12.6 (nvidia-smi showed 12.6, nvcc --version showed 12.3). I had zero success getting it to work with CUDA 12.6 and cuDNN 9 and had zero desire to downgrade my NVIDIA driver for other reasons. Luckily you can have the best of both worlds by not trying to adhere to always matching the CUDA version in the windows driver to the CUDA version you install in WSL.

@Ayke (Author) commented Sep 30, 2024

Hello sudarshan,

Thanks for the info! Yes, it is possible to get CUDA driver version 1 to work with CUDA runtime version 2, as long as the APIs being used have not changed between version 1 and version 2; theoretically speaking that is the case. However, we have no way of knowing this compatibility matrix beforehand, as it is not officially supported (I may be wrong and would be happy to hear that I am wrong).

I'm sorry to hear that the CUDA driver would not work with the same version of the CUDA runtime in your case; I don't know why. It's really just that I don't have any better reason to NOT use the same version of CUDA runtime and CUDA driver, and luckily it has been working for me.

Possibly we should not determine CUDA driver-runtime compatibility just by the simple version numbers (12.6 vs 12.3); maybe we need to take a further look into the CUDA release notes (like https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html) to find out. Maybe we should look at

NVIDIA Linux Driver | 560.35.03 | x86_64, arm64-sbsa | Linux
NVIDIA Windows Driver | 560.94 | x86_64 (Windows) | Windows, WSL

to determine if the driver is compatible with the runtime, but unfortunately I haven't found a feasible way for our case. The above doc actually describes the compatibility between the CUDA toolkit and the CUDA driver, rather than between the CUDA runtime and the CUDA driver.

@Ayke (Author) commented Oct 7, 2024

Thank you so much for the help.

My pleasure! Glad that it helps people XD
