-
Time of writing: Jan 18, 2023, updated on Sep 22, 2024. The following assumes that you're trying to install CUDA on WSL2 Ubuntu.
-
Check the support matrix before you install any version of CUDA: chances are the latest CUDA does not have cuDNN support yet, and you would have to re-install an older version if you only found out later.
https://docs.nvidia.com/deeplearning/cudnn/support-matrix/index.html#cudnn-cuda-hardware-versions
At the time of writing, the latest cuDNN version is 8.7 and it supports CUDA 11.8.
-
Windows 10 must be build 20145 or later, or you should be on Windows 11.
At the time of writing (Sep 2024), the latest Windows 10 (22H2) release build is 19045.xxxx. Build 20145 or later does not seem to be included in any official release. To the best of the writer's knowledge, if your current Windows 10 build is older than 20145, there is no official way to upgrade to it, even if you have enrolled in the Insider Program. Your only shot is to download unofficial images from uupdump or adguard and install them, but I do not recommend that, because Windows 10 will find that your image has expired and you'll receive endless annoying alerts.
The bottom line is, if your build version is older than 20145, you'd better just upgrade to Windows 11. You'll probably need to enable TPM 2.0 in the BIOS for that.
-
WSL must be WSL2
At the time of writing (Jan 2023), CUDA does not support WSL1, and I don't know if it ever will. Even the CUDA support for WSL2 is somewhat limited, so the chance that WSL1 will ever work with CUDA is slim.
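If you're not sure whether your distro is running under WSL2, you can check and convert it from Windows PowerShell/CMD (the distro name Ubuntu below is an assumption; use whatever wsl -l -v lists for you):
wsl -l -v
wsl --set-default-version 2
wsl --set-version Ubuntu 2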
-
WSL2 Ubuntu kernel version must be at least 4.19.121
Run wsl cat /proc/version (from Windows) or cat /proc/version (inside Ubuntu) to check your WSL Ubuntu's kernel version. It must be at least 4.19.121, and preferably 5.10.16.3 or later.
If your WSL Ubuntu is too old:
sudo apt update && sudo apt upgrade
sudo do-release-upgrade
will upgrade your Ubuntu packages and release; the WSL2 kernel itself is updated from Windows, as shown below.
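A minimal check-and-update sequence for the kernel itself (the wsl commands run in Windows PowerShell/CMD, not inside Ubuntu):
# inside Ubuntu: check the current kernel
cat /proc/version
# from Windows PowerShell/CMD: update the WSL kernel and restart WSL
wsl --update
wsl --shutdown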
The key point of installing CUDA in WSL2 is that you MUST match three things together:
- cuDNN version
- Ubuntu CUDA runtime version
- Windows CUDA library version
The fundamental relation is: the Nvidia driver includes the actual CUDA capability (above we call it the CUDA library, for lack of a better word; in our scenario it lives in the Windows Nvidia driver), and the runtime instructs applications how to use it (in our scenario it lives in Ubuntu). cuDNN is an add-on to the runtime that gives applications an easy way to run deep neural networks on CUDA.
In short, you will install a Windows Nvidia driver and a Linux CUDA runtime (not the Linux driver; you only install one driver), then install cuDNN on top of the Linux CUDA runtime.
First, look up the latest version of cuDNN; this determines the versions of everything else (graphics driver, CUDA runtime, etc.).
For example, if you open up https://docs.nvidia.com/deeplearning/cudnn/latest/reference/support-matrix.html#cudnn-cuda-hardware-versions
you may find that the latest cuDNN is 9.4.0 and that it supports CUDA Toolkit 11.x and 12.x, which correspond to an NVIDIA driver version for Windows >= 452.39 and >= 527.41 respectively.
Therefore you will download the cuDNN library (tarball) and a Windows Nvidia driver whose version satisfies the above requirement (depending on whether you'll use CUDA 11 or 12; let's say we go with CUDA 12, so I'll download a driver whose version is greater than or equal to 527.41).
Sometimes cuDNN does not support the latest Windows Nvidia driver (say it does not support the latest CUDA 12.7 yet); in that case you will need to install a Windows Nvidia driver that ships an earlier version of CUDA (say 12.6).
However, since the Windows Nvidia graphics driver does not explicitly tell you which version of CUDA it includes, you'll need to guess its CUDA version from the CUDA release dates (https://developer.nvidia.com/cuda-toolkit-archive). Say 12.6.1 was released in August; your July-released Windows driver then likely ships 12.5.1.
After installing the driver, later steps will reveal its CUDA version, so if you have trouble making sure the Windows driver's CUDA version is correct, it's only a matter of a couple more installation attempts.
- Pick the latest cuDNN, then look for the range of CUDA versions it supports.
- Check the release date of those CUDA versions
- Search & Install a Graphics Drivers whose CUDA is supposed to be supported by cuDNN.
- After installation of the graphics driver, in your Ubuntu bash run nvidia-smi (under /usr/lib/wsl/lib) and check its CUDA version.
- Install the CUDA runtime of that version based on https://docs.nvidia.com/cuda/wsl-user-guide/index.html#cuda-support-for-wsl-2
- Install cuDNN in WSL2 Ubuntu (again, tar installation recommended)
Of all the normal debug/status-check commands, nvidia-smi belongs to the Windows driver, and nvcc belongs to the Linux CUDA runtime.
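A quick sanity check once both sides are installed (the grep patterns just match the usual output headers):
nvidia-smi | grep "CUDA Version"
nvcc --version | grep release
The two reported CUDA versions should match.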
I'll skip the part where you install the Windows graphics driver. If you need to find your GPU's model, open up Nvidia Settings and you'll see it.
After you've installed the Windows driver, and after WSL2 has been installed:
Go to C:\Windows\System32\lxss\lib and see if the folder contains files. It should contain a lot of .so files (library files).
Also, in Ubuntu bash, which nvidia-smi should point to /usr/lib/wsl/lib/nvidia-smi, and it should already work before you install anything in WSL. Whatever nvidia-smi shows is the actual CUDA library version on your Windows side.
In other words, after the Nvidia Windows graphics driver is installed and WSL2 is set up, C:\Windows\System32\lxss\lib should have been mounted in the Linux subsystem as the folder /usr/lib/wsl/lib. If it doesn't work, it's either your Nvidia driver's fault or your WSL2 installation's fault.
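A minimal check that the mount is in place (file lists vary by driver version, but you should see libcuda.so and nvidia-smi in both):
ls /mnt/c/Windows/System32/lxss/lib
ls -l /usr/lib/wsl/lib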
If it works, then you may proceed to install the CUDA runtime; please install the exact same version of the runtime as nvidia-smi has shown.
You'd better go with Option 1 and download the CUDA toolkit directly from https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&Distribution=WSL-Ubuntu&target_version=2.0&target_type=deb_local, telling the site that you're installing the Linux version and that it's WSL-Ubuntu.
If you're going for Option 2 and installing it directly from the Linux package manager, note that you should not install the cuda, cuda-12-x, or cuda-drivers packages under WSL 2, as these packages will override the Nvidia driver you have installed.
You should install the cuda-toolkit-XX-X metapackage only.
sudo apt list cuda-toolkit --all-versions
To see all available cuda-toolkit versions.
sudo apt-get install <package-name>=11.7.1-1
To install a certain version of the package.
Reference: https://docs.nvidia.com/cuda/wsl-user-guide/index.html
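For example, if nvidia-smi reported CUDA 12.6, an Option 2 install could look like this (the 12-6 metapackage name is an assumption; pick the one that matches your driver, and the NVIDIA WSL-Ubuntu repository from the guide above must already be added):
sudo apt-get update
apt list -a cuda-toolkit-12-6
sudo apt-get install cuda-toolkit-12-6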
/usr/local/cuda/bin/nvcc --version
should work; nvcc will tell you the CUDA runtime version, which should match the CUDA library version shown by nvidia-smi.
Make sure /usr/local/cuda is pointing to the right CUDA runtime (the one whose version matches the Windows Nvidia driver you just installed, not some CUDA runtime you may have installed before).
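A quick way to check and, if needed, repoint the symlink (the cuda-12.6 directory name is illustrative; use the versioned directory you actually installed):
ls -l /usr/local/cuda
sudo ln -sfn /usr/local/cuda-12.6 /usr/local/cuda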
cuDNN is installed upon your Linux CUDA runtime, so you will download a Linux version of cuDNN and install it in WSL Linux.
https://developer.nvidia.com/rdp/cudnn-download
https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html#install-linux
Tar install is the most reliable and easiest way to install cuDNN; I personally do not recommend the deb installation.
To tar/tarball install, you just download the package, extract it, and copy the contents to the corresponding CUDA folders.
Whatever is in include, you copy to /usr/local/cuda/include;
whatever is in lib/lib64, you copy to /usr/local/cuda/lib64.
Don't forget to chown and chmod if needed.
(See more in https://docs.nvidia.com/deeplearning/cudnn/latest/installation/linux.html#tarball-installation, but it seems there's not much info there.)
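A sketch of the tar install, assuming a cuDNN 9.x tarball for CUDA 12 (the archive name below is illustrative; use the file you actually downloaded, and note the extracted folder may contain lib or lib64 depending on the release):
tar -xf cudnn-linux-x86_64-9.x.x.x_cuda12-archive.tar.xz
cd cudnn-linux-x86_64-9.x.x.x_cuda12-archive
sudo cp include/cudnn*.h /usr/local/cuda/include/
sudo cp -P lib/libcudnn* /usr/local/cuda/lib64/
sudo chmod a+r /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib64/libcudnn*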
If you're installing cuDNN from deb, here are several useful commands (the reason I don't recommend it is that I've tried it):
To check all cuDNN versions in package manager:
sudo apt list libcudnn8 --all-versions
If you can't find FreeImage.h, try
sudo apt-get install libfreeimage3 libfreeimage-dev
To remove a repository:
sudo dpkg -P <the deb you used to install>
And you may check /etc/apt/sources.list.d/ if your sudo apt update keeps getting blocked.
Download cudnn_samples for your cudnn version from https://developer.download.nvidia.com/compute/cudnn/redist/
It may be under the folder cudnn_samples/source instead of cudnn_samples/linux-x86_64.
Enter one of the examples, say src/cudnn_samples_v9/mnistCUDNN, run make, and then ./mnistCUDNN.
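For example (the path is an assumption depending on where you extracted the samples; the FreeImage packages mentioned further down are needed for this particular sample):
cd cudnn_samples_v9/mnistCUDNN
make clean && make
./mnistCUDNN
If everything matches, it should finish with a "Test passed!" message.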
You still need to set up $PATH so that the CUDA runtime may work correctly.
You should also make sure /usr/lib/wsl/lib is on your library search path.
If it's not, your kernel version is probably not high enough.
I do not recommend manually adding /usr/lib/wsl/lib to your LD_LIBRARY_PATH, because it should be managed automatically via /etc/ld.so.conf.d/ld.wsl.conf, which is generated by WSL.
If you don't have that file, it's probably a sign that your WSL needs an update (or a re-install after an uninstall).
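A minimal sanity check (the exact contents may differ between WSL versions):
cat /etc/ld.so.conf.d/ld.wsl.conf
ldconfig -p | grep libcuda
The first should list /usr/lib/wsl/lib, and the second should show libcuda.so coming from that folder.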
In ~/.bashrc
, add the following paths for CUDA:
if [ -d $HOME/.local/bin ]; then
export PATH=$HOME/.local/bin:$PATH
fi
export CUDA_HOME=/usr/local/cuda
if [ -d $CUDA_HOME/bin ]; then
export PATH=$CUDA_HOME/bin:$PATH
fi
if [ -d $CUDA_HOME/lib64 ]; then
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
fi
Then restart the terminal or source ~/.bashrc. The first path is actually for pip, but I included it anyway for my personal reference XD.
Use PyTorch's collect_env.py to verify your environment.
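Assuming PyTorch is already installed in the environment you want to test, either of these will do the check:
python -m torch.utils.collect_env
python -c "import torch; print(torch.cuda.is_available(), torch.version.cuda)"
The first prints the full environment report; the second should print True plus the CUDA version PyTorch was built against.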
/usr/lib/wsl/lib is actually C:\Windows\System32\lxss\lib; your .so files got installed there when you installed the Nvidia graphics driver in Windows.
Make sure your links such as /usr/local/cuda are pointing to the right place.
sudo apt-get install libfreeimage3 libfreeimage-dev
will give you the FreeImage.h header.
To test the overall installation, a CUDA sample run is still needed. I did not include it in this doc, as I started using PyTorch etc., which also proves that the installation works.
So to test it with a CUDA sample:
https://github.com/nvidia/cuda-samples
https://docs.nvidia.com/cuda/cuda-installation-guide-linux/#mandatory-actions
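A rough sequence with the cuda-samples repo (the per-sample Makefile layout below comes from older releases; newer releases build with CMake, so adjust accordingly):
git clone https://github.com/NVIDIA/cuda-samples.git
cd cuda-samples/Samples/1_Utilities/deviceQuery
make
./deviceQuery
deviceQuery should report your GPU and end with Result = PASS.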
This was very helpful.
You have a typo in your "Path" section. My LD_LIBRARY_PATH wasn't updating.
if [ -d $CUDA_HOME/lib64n ]; then
should be
if [ -d $CUDA_HOME/lib64 ]; then