First of all, enable the Nvidia RPM Fusion repo. This file should already exist on your system. Alternatively, you can enable it from GNOME Software or KDE Welcome Center.[^1]

```shell
sudo sed -i 's/enabled=0/enabled=1/g' /etc/yum.repos.d/rpmfusion-nonfree-nvidia-driver.repo
```

Also add the Nvidia container toolkit repo.[^2] The Nvidia container toolkit is needed if you want to use CUDA inside a Docker/Podman container later. However, you usually don't need the CUDA toolkit/compiler itself.[^3]
```shell
# Add the Nvidia container toolkit repo
curl -s -L https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo | sudo tee /etc/yum.repos.d/nvidia-container-toolkit.repo
# Refresh the repo metadata
sudo rpm-ostree refresh-md
```

Now it's time to install our Nvidia packages. In this case, we're going to install `akmod-nvidia`, `xorg-x11-drv-nvidia`, `xorg-x11-drv-nvidia-cuda`, and `nvidia-container-toolkit`. Although the Nvidia driver package name starts with `xorg`, we can still use it on Wayland (it's named like that for historical reasons).
Also, depending on the GPU you have, you may need to use an older legacy Nvidia driver series (e.g. `xorg-x11-drv-nvidia-470xx`, `akmod-nvidia-470xx`) instead of the latest one (`xorg-x11-drv-nvidia`). Installing the wrong driver version can lead to an unstable or broken system.
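To figure out which series you need, first identify the exact GPU model and cross-check it against RPM Fusion's legacy driver list. A quick way to do that (the `grep` pattern is just a filter to narrow the output; adjust it if your GPU is reported differently):

```shell
# Show the GPU model and its PCI vendor/device IDs
lspci -nn | grep -Ei 'vga|3d|display'
```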
And while we're at it, we can optionally install distrobox[^4] too. Distrobox is usually used to create a container (holding common dependencies like Python, etc.) that can be shared among multiple projects. Alternatively, you can just use Dev Containers (e.g. on VS Code), with or without Distrobox.

You can see the combined install command below.
```shell
# Install layered packages on top of the system (also includes distrobox)
sudo rpm-ostree install akmod-nvidia xorg-x11-drv-nvidia xorg-x11-drv-nvidia-cuda nvidia-container-toolkit distrobox
# Alternative if you want to use distrobox without layering
curl -s https://raw.githubusercontent.com/89luca89/distrobox/release/install | sh -s -- --prefix ~/.local
```

Then we also need to add the kernel parameters below to prevent the default nouveau driver from loading.
```shell
# Append kernel parameters
sudo rpm-ostree kargs --append=rd.driver.blacklist=nouveau --append=modprobe.blacklist=nouveau
# You can also use `sudo EDITOR=nano rpm-ostree kargs --editor` for more flexible editing
```

Note that adding the `nvidia-drm.modeset=1` parameter is usually not necessary, as it's already the default behavior. Check this with `sudo cat /sys/module/nvidia_drm/parameters/modeset` if you're not sure.[^5]
After everything is done, restart the system and check whether the Nvidia driver is installed correctly.
```shell
# To check the current kernel parameters
cat /proc/cmdline
# To check if the Nvidia driver is used instead of nouveau
lspci -v
# To check if Nvidia is actually working
nvidia-smi
# To check the installed Nvidia packages (and their dependencies)
rpm-ostree status -v
rpm -qa | grep nvidia
```

I don't think it's a good idea to install another layered package (VS Code) on the system, since the layering process is already slow enough with the added Nvidia driver. Instead, I'm going to use the Flatpak VS Code,[^6] despite its issues[^7] (which are irrelevant here since I'm using a container anyway).
```shell
flatpak install com.visualstudio.code
# Whitelist the /tmp folder to make "Dev Containers" work
flatpak --user override --filesystem=/tmp com.visualstudio.code
# Run VS Code (can also be launched from the start menu)
flatpak run com.visualstudio.code
# [Manual] Install the "Dev Containers" extension on VS Code
# Add Podman wrappers for Distrobox
mkdir -p ~/.var/app/com.visualstudio.code/data/node_modules/bin
ln -sf /app/bin/host-spawn ~/.var/app/com.visualstudio.code/data/node_modules/bin/bash
ln -sf /app/bin/host-spawn ~/.var/app/com.visualstudio.code/data/node_modules/bin/podman
ln -sf /app/bin/host-spawn ~/.var/app/com.visualstudio.code/data/node_modules/bin/docker-compose
# [Manual] Change the "Dev › Containers: Docker Path" setting on VS Code to "podman"
```

To make the VS Code Flatpak app recognize our executables, the easiest way is to link them via `host-spawn` like above. Don't worry if `/app/bin` doesn't exist on the host; that's just how Flatpak sandboxing works internally. As for the missing `docker-compose`, we can install it via Podman Desktop (Flatpak) later.
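Here's the trick in miniature, using a stand-in script (the real `host-spawn` only exists inside the Flatpak sandbox): `host-spawn` forwards to the host command matching the name it was invoked as, so one binary behind several symlinks can impersonate `bash`, `podman`, and `docker-compose`.

```shell
# Stand-in for host-spawn: just echoes the name it was called as
mkdir -p /tmp/fakebin
printf '#!/bin/sh\necho "forwarding: $0 $*"\n' > /tmp/fakebin/host-spawn
chmod +x /tmp/fakebin/host-spawn
# One symlink per tool, all pointing at the same wrapper
ln -sf /tmp/fakebin/host-spawn /tmp/fakebin/podman
/tmp/fakebin/podman info
# → forwarding: /tmp/fakebin/podman info
```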
Although Distrobox says you can use the GPU inside the container,[^8] you actually need to generate a CDI config file first[^2] or the GPU won't be detected by the container. If you're using Podman, this is the right command:[^9]
```shell
# nvidia-ctk is part of nvidia-container-toolkit
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
# Check the generated devices
nvidia-ctk cdi list
```

Now create the container using Distrobox (or restart it if you already created one before). I recommend using a separate home directory, since the home directory will be shared directly (with read-write access) with the container, meaning that installing user libraries/packages in the container would otherwise install them into your host home folder too.
```shell
# Create a custom home directory (you can use any directory)
mkdir -p ~/.distrobox/home
# Folders that will be shared with the container
ln -s ~/Documents ~/.distrobox/home/Documents
ln -s ~/Downloads ~/.distrobox/home/Downloads
# Create and download the container (I use Arch btw)
distrobox create --nvidia --name arch --image archlinux:latest --home /home/<user>/.distrobox/home
# Run the container and access the shell (you can also launch it from the start menu)
distrobox enter arch
```

If you don't want to use the CLI, you can install GUI apps like Distroshelf and Podman Desktop from Flatpak.
```shell
flatpak install flathub com.ranfdev.DistroShelf
flatpak install flathub io.podman_desktop.PodmanDesktop
```

Also, for non-distro images such as PostgreSQL and Apache images (which is usually the case if you're using Dev Containers), you may want to set the config below to avoid SELinux and permission issues[^10] (for rootless Podman):
```toml
# ~/.config/containers/containers.conf
[containers]
env = ["BUILDAH_FORMAT=docker"]
label = false
userns = "keep-id"
```

To connect to an existing Distrobox container from VS Code, you can simply select the container from the sidebar, or press Ctrl + Shift + P and choose "Attach to Running Container...". After that, open your actual project directory (e.g. in the Documents folder) once you're inside the Distrobox container.
However, if you don't want to use Distrobox and prefer Dev Containers instead, you need to create a `.devcontainer/devcontainer.json` file in your project folder. Refer to containers.dev for template examples; it's pretty easy to set up once you get the hang of it.
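As a starting point, a minimal sketch of such a file might look like this (the image name is just an example, and the `runArgs` assume rootless Podman with the CDI spec generated earlier):

```json
{
  "name": "cuda-dev",
  "image": "mcr.microsoft.com/devcontainers/python:3.12",
  // Pass the GPU through via CDI (requires /etc/cdi/nvidia.yaml to exist)
  "runArgs": ["--device", "nvidia.com/gpu=all", "--security-opt=label=disable"]
}
```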
I assume you already know the next step. For example, to simply test PyTorch, you can use the commands below.

```shell
# Execute on the VS Code (container) terminal/shell
sudo pacman -Syu python python-pip
# Arch's Python is externally managed (PEP 668), so install into a venv
python -m venv ~/torch-venv && source ~/torch-venv/bin/activate
pip install torch --index-url https://download.pytorch.org/whl/cu130
python -c 'import torch; print(torch.cuda.is_available())'
```

If the output is `True`, then the GPU is passed correctly to the container. Without PyTorch, you can also simply run `nvidia-smi` within the container.
Footnotes

[^1]: https://docs.fedoraproject.org/en-US/quick-docs/rpmfusion-setup
[^2]: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
[^3]: https://github.com/NVIDIA/nvidia-container-toolkit#getting-started
[^5]: https://wiki.archlinux.org/title/NVIDIA#DRM_kernel_mode_setting
[^6]: https://distrobox.it/posts/integrate_vscode_distrobox/#from-flatpak
[^7]: https://github.com/flathub/com.visualstudio.code/issues/55
[^8]: https://distrobox.it/useful_tips/#using-the-gpu-inside-the-container
[^9]: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/cdi-support.html