Just some tips I gathered over time. All in one easily reachable place so I can share it wherever I want.
Please note that unless you see a shebang (#!/...) these code blocks are meant to be copied & pasted as-is.
Some steps will not work if you run part of them in a script and copy & paste the others.
- Proxmox VE tips
- Table of contents
- Discard
- Preventing a full storage
- Useful installer shortcuts/tips
- Temporary kernel arguments
- Passthrough recovery
- Passthrough tips
- Rescanning disks/volumes
- Making KSM start sooner
- Enabling a VM's serial console
- Importing disk images
- Networking
- GPU passthrough
- Install intel drivers/modules
- Install nvidia drivers/modules via apt
- Install nvidia drivers/modules via .run file
- Install and configure NVIDIA Container Toolkit
- ZFS tips
- Misc tips and scripts
- Find unused disks/volumes
- Restore guest configs
- Monitor disk SMART information
- Credentials
- Monitor swap usage
- Check which PCI(e) device a drm device belongs to
- Persistent renderD*/card* or other device names
- Check which PCI(e) device a disk belongs to
- IO debugging
- Set up no-subscription apt repositories
- Fix locales
- Enable package notifications
- FAQ
Using trim/discard with thinly allocated disks (which is the default) gives space back to the storage. This saves space, makes backups faster and is needed for thin allocation to work as expected. It is not related to the PVE storage being backed by an SSD; use it whenever the storage is thin provisioned. For ZFS this still applies even if Thin Provision (see note below) is not enabled.
Check lvs's Data% column and zfs list's USED/REFER. You might find them going down when triggering a trim as explained below.
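As a quick illustration of what to look for, here is an awk filter over a trimmed, made-up `lvs` sample (volume names and numbers are invented; on a real node just run `lvs` and read the Data% column):

```shell
# Made-up, trimmed `lvs` output: volume name first, Data% as the last column
lvs_sample() {
cat <<'EOF'
  vm-100-disk-0 data Vwi-aotz-- 32.00g data 41.21
  vm-101-disk-0 data Vwi-aotz-- 16.00g data 12.05
EOF
}

# Print thin volumes whose Data% exceeds a threshold (40 here);
# rerun after a trim and the percentage should drop
lvs_sample | awk '$NF > 40 {print $1, $NF}'
```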
Also see official docs:
- https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_trim_discard
- https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_hard_disk_discard
- https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_thin_provisioning
- https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_storage_types
If you use ZFS you might also want to enable Thin Provisioning in Datacenter > Storage for your ZFS storage.
This will only affect newly created disks. Here's how to apply the setting for already existing disks.
Containers usually cannot call fstrim themselves. You can trigger a one time immediate trim via pct fstrim IDOFCTHERE on the node.
I use a cronjob calling pct fstrim (add via crontab -e).
```
30 0 * * 0 pct list | awk '/^[0-9]/ {print $1}' | while read ct; do pct fstrim ${ct}; done
```

You can also run the command after 30 0 * * 0 manually on the node, of course.
Alternatively you can select discard (8.3.x+) as mount option so this happens immediately.
You do not need to enable this for pct fstrim to work.
Use the mount option when you want it to be immediate/continuous and the pct fstrim cronjob to trigger it on a schedule like it usually works for VMs. I prefer the latter.
You can trigger a one time immediate trim (as root) via fstrim -av from inside a VM.
You can also trigger it from the node side via qm guest exec if the VM has the guest agent enabled and configured
```
qm list | grep "running" | awk '/[0-9]/ {print $1}' | while read vm; do echo "Trimming ${vm}"; qm guest exec ${vm} -- fstrim -av; done
```

Most OSs come with a fstrim.timer which, by default, does a weekly fstrim call.
You can check with systemctl status fstrim.timer. If disabled run systemctl enable fstrim.timer.
To edit it to happen more frequently run systemctl edit fstrim.timer and write this.
```
[Timer]
OnCalendar=daily
```

Some guest operating systems may also require the SSD Emulation flag to be set. If you would like a drive to be presented to the guest as a solid-state drive rather than a rotational hard disk, you can set the SSD emulation option on that drive. There is no requirement that the underlying storage actually be backed by SSDs; this feature can be used with physical media of any type.
For above to work the disk(s) should have the Discard flag set.

If you use the Guest Agent (which you really should) I'd also recommend enabling this under Options > QEMU Guest Agent.

When using thin allocation it can be problematic when a storage reaches 100%. For ZFS you may also want to stay below a certain threshold.
If your storage is already full see this forum post specific to ZFS.
I use a modified version of this snippet to send me a mail if any of my storages reach 75% usage.
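`pvesm status` prints the usage percentage in column 7, which is what the awk filters below key on. A hedged offline sketch of the same filter against made-up sample output (storage names and numbers are invented):

```shell
# Made-up `pvesm status` output; on a node you would pipe the real command
pvesm_sample() {
cat <<'EOF'
Name       Type     Status  Total      Used      Available  %
local      dir      active  100663296  80530637  20132659   80.00%
local-lvm  lvmthin  active  150994944  30198988  120795956  20.00%
EOF
}

# Strip the percent sign, then print storages at or above 75% usage
pvesm_sample | tr -d '%' | awk '$7 >= 75 {print $1,$2,$7}'
```

The cronjobs below apply the same filter to the live `pvesm status` output.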
# Storage running out of storage. Percentage escaped due to crontab
```
*/15 * * * * pvesm status 2>&1 | grep -Ev "disabled|error" | tr -d '\%' | awk '$7 >=75 {print $1,$2,$7}' | column -t
```

Or to check a specific type of storage, LVM-Thin in this case
```
*/15 * * * * pvesm status 2>&1 | grep "lvmthin" | grep -Ev "disabled|error" | tr -d '\%' | awk '$7 >=75 {print $1,$2,$7}' | column -t
```

A similar method can be used to check the file system directly for, in this example, at least ~100G of free space.
```
*/15 * * * * df /mnt/backupdirectory | tail -n1 | awk '$4 <=100000000 {print $1,$4,$5}' | column -t
```

It's generally advised to use the full path to executables in cronjobs (like /usr/sbin/pvesm) as PATH is different.
I use this at the top of mine so I don't have to care about that.
```
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
```

Inside the PVE/PBS installer you can use the following shortcuts.
The terminal is particularly useful in case you need a live environment or want to do some pre-install customizations.
| Shortcut | Info |
|---|---|
| CTRL+ALT+F1 | Installer |
| CTRL+ALT+F2 | Logs |
| CTRL+ALT+F3 | Terminal/Shell |
| CTRL+ALT+F4 | Installer GUI |
If you press E (see below) you can add args that will be persisted into the installed system.
When pressing E during boot/install when the OS/kernel selection shows up you can temporarily edit the kernel arguments. This is useful to debug things or disable passthrough if you run into an issue.
Also see here: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#nomodeset_kernel_param
| Argument | Info |
|---|---|
| nomodeset | Helps with hangs during boot/install. Nvidia often needs this |
| debug | Debugging messages |
| fsck.mode=force | Triggers a file system check |
| systemd.mask=pve-guests.service | Prevents guests from starting up |
When passing through devices it can sometimes happen that your device shares an IOMMU group with something else that's important.
It's also possible that groups shift if you exchange a device. All of this can cause a system to become unbootable.
If editing the boot arguments doesn't help, the simplest fix is to go into the UEFI/BIOS and disable every virtualization related thing. VT-x/VT-d/SVM/ACS/IOMMU or whatever it's called for you.
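For "editing the boot arguments" concretely: at the boot menu press E and append one of these to the kernel command line to disable the IOMMU for one boot (vendor-specific; treat this as a hint, not a guaranteed fix):

```
intel_iommu=off
# or, on AMD systems
amd_iommu=off
```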
For checking IOMMU groups I like this script: https://wiki.archlinux.org/title/PCI_passthrough_via_OVMF#Ensuring_that_the_groups_are_valid.
For your convenience
```
#!/bin/bash
shopt -s nullglob
for g in $(find /sys/kernel/iommu_groups/* -maxdepth 0 -type d | sort -V); do
    echo "IOMMU Group ${g##*/}:"
    for d in $g/devices/*; do
        echo -e "\t$(lspci -nns ${d##*/})"
    done
done
```

To check the hostpci settings of existing VMs (to find which ones use passthrough) you can do this
```
grep -sR "hostpci" /etc/pve
```

Simple one liner
```
lspci -vv | grep -P "\d:\d.*|IOMMU"
```

If you want to use PVE tooling to check the IOMMU groups you can use this
```
pvesh get /nodes/$(hostname)/hardware/pci --pci-class-blacklist ""
```

To check the IOMMU groups in the GUI you can use the Hardware tab of the VM when adding a PCI(e) device.

Or you can check in Datacenter > Resource Mappings which I think is easier to read because of its tree structure.
It also warns about IOMMU groups.

pct rescan and qm rescan can be useful to find missing volumes and add them to their respective VM/CT.
You can find them as unused disks in Hardware/Resources.
KSM and ballooning both start when the host reaches 80% memory usage by default.
Ballooning was hardcoded before version 8.4 but it is now configurable via node > System > Options > RAM usage target for ballooning.
To make KSM start sooner and give it a chance to "free" some memory before ballooning starts you can modify /etc/ksmtuned.conf.
For example to let it start at 70% you can configure it like this
```
KSM_THRES_COEF=30
```

You can also make it more "aggressive" with something like this
```
KSM_NPAGES_MAX=5000
```

Also see official docs:
- https://pve.proxmox.com/pve-docs/pve-admin-guide.html#ballooning-target
- https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_memory
- https://pve.proxmox.com/pve-docs/pve-admin-guide.html#kernel_samepage_merging
- https://pve.proxmox.com/wiki/Dynamic_Memory_Management
- https://pve.proxmox.com/wiki/Kernel_Samepage_Merging_(KSM)
This allows you to use xterm.js (used for CTs by default) which allows copy & pasting. Tested for debian/ubuntu.
All commands are to be run inside the VM and this might also work for other OSs. Please let me know if it does.
Go to the Hardware tab of your VM and add a Serial Port.

Some distributions are already set up for this or can be configured via their own UI and this step can be skipped for them.
For example Home Assistant's HAOS is already set up for this and TrueNAS can be configured for it via UI.
Either one of these commands can help finding the right tty.
```
dmesg -T | grep "tty"
journalctl -b0 -kg "tty"
```

For example it's ttyS0 for me

```
Aug 18 02:17:16 nodename kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
```
To enable the TTY edit /etc/default/grub via

```
nano /etc/default/grub
```

Find the line starting with GRUB_CMDLINE_LINUX_DEFAULT and add console=ttyS0 console=tty0 at the end (replace ttyS0 with yours from above).
It can look like this for example

```
GRUB_CMDLINE_LINUX_DEFAULT="console=ttyS0 console=tty0"
```

Save via CTRL+X and exit. Afterwards run

```
update-grub
```

See here for more:
- https://0pointer.de/blog/projects/serial-console.html
- https://docs.kernel.org/admin-guide/serial-console.html
- https://www.kernel.org/doc/html/v4.14/admin-guide/kernel-parameters.html
Reboot the VM via the PVE button or power it off and on again to apply the Hardware and bootloader config change.
This is so the VM is cold booted. A normal reboot command from within the VM will not do the same.
You can see whether a Hardware change was applied by its color: if it's orange it is still pending.
Once that's done your VM should have a functioning xterm.js button under the Console one. Click the arrow beside it.
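After the cold boot you can confirm the arguments actually reached the kernel by reading /proc/cmdline inside the VM. A sketch with a sample string (the BOOT_IMAGE/root values here are made up):

```shell
# Inside the VM you would use: cat /proc/cmdline
cmdline="BOOT_IMAGE=/boot/vmlinuz root=/dev/sda1 ro quiet console=ttyS0 console=tty0"
# Show just the console= arguments
echo "$cmdline" | grep -o "console=[^ ]*"
```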

You don't have to use the CLI via qm disk import; you can also use the GUI to import disk images or whole machines.
This assumes you use the local storage. Replace with whatever Directory storage you want to use.
- Go to `Datacenter > Storage` and modify `local` to have the `Import` content type.

- Go to `local > Import` and use the buttons at the top to upload/download/import your OVA/QCOW2/RAW/VMDK/IMG.

- Select your file and click the `Import` button at the top.
When creating a VM you can delete the existing disk and select Import to use it

When importing a machine I recommend changing the following settings, at least for linux guests.
- `OS Type > Linux`
- `Advanced > Disks > SCSI Controller > VirtIO SCSI single`
- `Advanced > Network Interfaces > Model > VirtIO (paravirtualized)`
A NIC's (Network Interface Controller/Card) name is hardware dependent and can change when you add or remove PCI(e) devices. Sometimes major kernel upgrades can also cause this.
Since the /etc/network/interfaces file which handles networking uses these names to configure your network, changes to the name will break it.
To prevent those changes you can use a systemd .link file to permanently override the name.
PVE 9 comes with the pve-network-interface-pinning tool.
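A minimal sketch of such a .link file, e.g. /etc/systemd/network/10-lan0.link (the MAC address and the name lan0 are placeholders; match on your NIC's real MAC from ip l):

```
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=lan0
```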
You can show your NICs and assigned ips with ip a/ip l.
If you have a PCI(e) NIC you can use this to show the device(s) and their modules/drivers.
```
lspci -vnnk | awk '/Ethernet/{print $0}' RS= | grep -Pi --color "^|(?<=Kernel driver in use: |Kernel modules: )[^ ]+"
```

If you have a USB NIC you can use
```
lsusb -vt | grep -Pi --color "^|(?<=Driver=)[^,]+"
```

Just skip the `| grep ...` with the poor regexes if you don't need to color the output.
This next one shows the driver used for each NIC and is useful because it also shows the actual interface name, like eno1.
```
# ls -l /sys/class/net/*/device/driver
lrwxrwxrwx 1 root root 0 May 15 12:58 /sys/class/net/enp6s0/device/driver -> ../../../../../../bus/pci/drivers/igb
lrwxrwxrwx 1 root root 0 May 15 12:58 /sys/class/net/enp7s0/device/driver -> ../../../../../../bus/pci/drivers/igb
lrwxrwxrwx 1 root root 0 May 15 12:58 /sys/class/net/enx00e04c680085/device/driver -> ../../../../../../../bus/usb/drivers/r8152
```

This shows the device path each NIC belongs to.
Note the values before and after the ->. In this example 06:00.0 and 07:00.0; enx00e889680195 is a USB device.
You can then cross-reference them with the first column of the lspci | grep -i "Ethernet" or lsusb -vt output
```
# ls -l /sys/class/net/*/device
lrwxrwxrwx 1 root root 0 Jun 24 12:32 /sys/class/net/enp6s0/device -> ../../../0000:06:00.0
lrwxrwxrwx 1 root root 0 Jun 24 12:32 /sys/class/net/enp7s0/device -> ../../../0000:07:00.0
lrwxrwxrwx 1 root root 0 Jun 24 12:32 /sys/class/net/enx00e889680195/device -> ../../../4-1:1.0
```

To temporarily use DHCP you can use this
```
# PVE 8 / Debian 12
ifdown vmbr0; dhclient -v
# When done testing
dhclient -r; ifup vmbr0

# PVE 9 / Debian 13
ifdown vmbr0; dhcpcd -d
# When done testing
dhcpcd -k; ifup vmbr0
```

Optionally pass the NIC name as argument to dhclient/dhcpcd to test a specific one.
This is useful to check general router connectivity or what the subnet/gateway is.
It also allows you to check if your DHCP reservation is properly set up.
To see which port a network cable is plugged into you can unplug it, run dmesg -Tw to follow the kernel logs and then plug it in again.
Use CTRL+C to stop following the kernel log.
The classic to make the LED blink
```
# NIC from "DHCPREQUEST for x.x.x.x on NIC_NAME_HERE to x.x.x.y port 67"
ethtool --identify NIC_NAME_HERE
```

Not really helpful if you have no network though, as ethtool is not pre-installed.
There's multiple ways (GUI or CLI) and multiple files to edit.
You need to edit these files
- `/etc/network/interfaces` (`node > System > Network` in the GUI)
- `/etc/hosts` (`node > System > Hosts` in the GUI)
- `/etc/resolv.conf` (`node > System > DNS` in the GUI)
- `/etc/issue` (what you see when logging in; just informational but still a good idea to update)
- `/etc/pve/corosync.conf` (when in a cluster; `config_version` needs to be incremented when you change things)
I recommend doing grep -sR "old.ip.here" /etc to check if you missed something.
Calling pvebanner, restarting the pvebanner service or rebooting should update the /etc/issue as well. Do this last.
To "reload" /etc/network/interfaces and apply the new ip you can do something like ifreload -av or simply reboot.
ifupdown2 keeps old interfaces files in /var/log/ifupdown2/. You can find them like this

```
find /var/log/ifupdown2/ -name "interfaces"
```

This will likely never be a complete tutorial, just some often shared commands, tips and scripts.
Consult the following sources for instructions and use mapped devices rather than raw ones.
- https://pve.proxmox.com/wiki/PCI(e)_Passthrough
- https://pve.proxmox.com/wiki/PCI_Passthrough
- https://wiki.archlinux.org/title/PCI_passthrough_via_OVMF
Make sure to check the IOMMU groups before passing a device to a VM. See above.
Make sure you can see the device and that it uses the expected driver, i.e. nvidia, amdgpu, i915, etc.
```
lspci -vnnk | awk '/VGA/{print $0}' RS= | grep -Pi --color "^|(?<=Kernel driver in use: |Kernel modules: )[^ ]+"
```

If nvidia devices are not available when the system boots you can work around it by adding this to your crontab

```
@reboot /usr/bin/nvidia-smi > /dev/null
```

Create and start your CT before continuing.
For NVIDIA you can use the Nvidia Container Toolkit way.
The benefits of that are that you do not have to install drivers inside the CT, don't have to add devices or check groups, etc.
This also helps with changing render device names (multi GPU) and so on. You will also not run into conflicts between different driver versions on upgrades.
It's very simple and convenient and my recommended way to do this for NVIDIA GPUs.
Install the NVIDIA drivers and the Nvidia Container Toolkit on the node.
Set a variable with a list of your CT IDs you want to configure. pct list shows them. In this example it's CT 400 and 55.
```
CTIDS=(400 55)
```

Then simply copy and paste this into the node's CLI. This will prepend the needed lines into the CT's config file and reboot it.
```
for ct in $(pct list | awk '/^[0-9]/ {print $1}'); do
    # Exact match against the CTIDS list (a plain substring match would
    # also wrongly match e.g. CT 55 against 5555)
    if [[ " ${CTIDS[*]} " != *" ${ct} "* ]]; then
        continue
    fi
    echo "# $ct"
    if grep -q "/usr/share/lxc/hooks/nvidia" "/etc/pve/lxc/${ct}.conf"; then
        echo "Already configured"
    else
        {
            echo "lxc.hook.pre-start: sh -c '[ ! -f /dev/nvidia0 ] && /usr/bin/nvidia-modprobe -c0 -u'"
            echo "lxc.environment: NVIDIA_VISIBLE_DEVICES=all"
            echo "lxc.environment: NVIDIA_DRIVER_CAPABILITIES=all"
            echo "lxc.hook.mount: /usr/share/lxc/hooks/nvidia"
            cat /etc/pve/lxc/${ct}.conf
        } > /etc/pve/lxc/${ct}.conf.new && mv /etc/pve/lxc/${ct}.conf.new /etc/pve/lxc/${ct}.conf
        echo "Configured"
        echo "pct reboot $ct"
        pct reboot "$ct"
    fi
done
```

If everything was done correctly, running nvidia-smi inside the CT should work.
Check the video and render group ids inside the CT (from the node side). This is important later.
The default ones below should work for debian.
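For reference, getent group prints colon-separated lines and the awk below just picks the name and the numeric gid. An offline sketch with sample lines (44/104 are the usual debian defaults for video/render):

```shell
# Inside a CT the real command is: getent group video render
printf '%s\n' 'video:x:44:' 'render:x:104:' | awk -F: '{print $1,$3}'
```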
First we define which CTIDs we want to work with
```
# CT IDs to check the groups for
CTIDS=(5555 2222 55)
```

Then we check the video and render groups of the CTs with those IDs
```
for id in ${CTIDS[@]}; do
    echo "# $id"
    pct exec $id getent group video render | awk -F: '{print $1,$3}'
    echo ""
done
```

This procedure simply calls `pct set IDOFCTHERE --devX /givenpath` for all the given paths and reboots the CT.
It handles the optional gids (for the video and render groups) when given.
Modify it to add more devices and change the gids. Invalid paths and CTs will be skipped so there's no need to remove anything you don't have.
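For reference, a call like `pct set 120 --dev0 /dev/dri/renderD128,gid=104` (CT ID 120 is illustrative) results in a line similar to this in /etc/pve/lxc/120.conf:

```
dev0: /dev/dri/renderD128,gid=104
```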
First we define which CTIDs we want to work with and which devices to pass to them
```
# CT IDs to add the devices to
CTIDS=(5555 2222 55)
```

Also see Check which PCI(e) device a drm device belongs to.
```
# Devices to add to the CT(s)
DEVICES=(
    "/dev/dri/renderD128,gid=104"
    "/dev/dri/renderD129,gid=104"
    "/dev/dri/renderD130,gid=104"
    "/dev/dri/renderD131,gid=104"
    "/dev/dri/card0,gid=44"
    "/dev/dri/card1,gid=44"
    "/dev/dri/card2,gid=44"
    "/dev/dri/card3,gid=44"
    "/dev/kfd,gid=104"
    "/dev/nvidia0"
    "/dev/nvidia1"
    "/dev/nvidia2"
    "/dev/nvidia3"
    "/dev/nvidiactl"
    "/dev/nvidia-uvm"
    "/invalid"
    "/dev/nvidia-uvm-tools"
)
```

Verify and show the group and user IDs for the devices on the node. The IDs/GIDs should match the CT side above. If not, modify them.
Note: You can run this inside the CT too.
```
function showDeviceInfo() {
    echo "user userName group groupName device"
    for device in "${DEVICES[@]}"; do
        trimmedDevice=${device%%,*}
        if [ -e "$trimmedDevice" ]; then
            echo "$(stat -c '%u %U %g %G %n' "$trimmedDevice") $device"
        fi
    done
}
showDeviceInfo | column -t
```

Run the rest of the script
```
for ct in $(pct list | awk '/^[0-9]/ {print $1}'); do
    # Exact match against the CTIDS list (a plain substring match would
    # also wrongly match e.g. CT 55 against 5555)
    if [[ " ${CTIDS[*]} " != *" ${ct} "* ]]; then
        continue
    fi
    echo "# $ct"
    index=0
    for device in "${DEVICES[@]}"; do
        trimmedDevice=${device%%,*}
        if [ -e "$trimmedDevice" ]; then
            echo "pct set $ct --dev${index} $device"
            pct set "$ct" --dev${index} "$device"
            ((index++))
        fi
    done
    echo "pct reboot $ct"
    pct reboot "$ct"
done
```

Some of these packages can be needed for the intel drivers/modules/tools to work properly inside a CT. For example with jellyfin/frigate.
This can be a little bit finicky so I stole part of the list from the helper script project.
```
apt install -y va-driver-all ocl-icd-libopencl1 intel-opencl-icd vainfo intel-gpu-tools nvtop
```

Validate with vainfo, intel_gpu_top and nvtop.
This is my current recommendation for PVE 9 / Debian 13. If you have to use PVE 8 or Debian 12 see an older version of this guide.
It uses packages straight from the debian repos. They might be a bit older but this is usually fine and keeps installation simple.
This guide is a bit more opinionated. For example it "forces" you to use the DEB822 format and provides no alternative. Please read the comments for additional hints and options.
Most guides use nvidia's .run files but then you have to update the drivers manually. Instead you can use the drivers/libs from the debian apt repository and update them like any other package.
Note that this has the disadvantage that, at least by default unless you pin versions, you have less control over updates and thus might need to reboot more often, for example when the version of the running driver doesn't match the libraries and tools any more.
These instructions are based on the official debian instructions
I modified them for easy copy pasting. These commands should work for nodes, VMs and CTs as long as they are based on debian. If you use ubuntu please use their docs.
This assumes you use the root user. These commands are to be run on the node/VM/CT. Copy & paste.
We need the non-free component. You should be able to run this to add the component to your /etc/apt/sources.list.d/debian.sources file and update the lists
```
# Rewrites apt *.list files to *.sources in DEB822 format
apt modernize-sources
# Optional: delete the backup files created by the modernize tool above
find /etc/apt/sources.list.d/ -type f -name "*.bak" -delete
# Rewrites the "Components:" line to add non-free and non-free-firmware
sed -i 's/^Components: .*/Components: main contrib non-free non-free-firmware/' /etc/apt/sources.list.d/debian.sources
# Updates the lists
apt update
```

If your node/VM uses Secure Boot (check with mokutil --sb-state) follow this section.
Make sure to monitor the next boot process via noVNC. You will be asked for the password when importing the key.
```
apt install dkms && dkms generate_mok
dpkg -s proxmox-ve 2>&1 > /dev/null && apt install -s pve-headers || apt install -s linux-headers-generic
# Set a simple password (a-z keys)
mokutil --import /var/lib/dkms/mok.pub
# If you followed this section after you already installed the driver run this and reboot
# dpkg-reconfigure nvidia-kernel-dkms
```

```
apt install nvidia-detect
# Will likely recommend "nvidia-driver"
nvidia-detect
# "nvidia-smi" and "nvtop" are optional but recommended
apt install nvidia-driver nvidia-smi nvtop
```

Here we just need the libraries, so nvidia-driver is replaced with nvidia-driver-libs.

```
# "nvidia-smi" and "nvtop" are optional but recommended
apt install nvidia-driver-libs nvidia-smi nvtop
```

Now see if nvidia-smi works. A reboot might be necessary for the node or a VM.
This can help save power and decrease access delays. See docs.
These commands are to be run on the node or VM. Copy & paste.
Enable and start it with
```
systemctl enable --now nvidia-persistenced.service
```

You can see the status in nvidia-smi.

This alternative to the apt installation method gives you more control over the version but you have to update yourself.
These commands should work for nodes, VMs and CTs as long as they are based on debian/ubuntu.
This assumes you use the root user. These commands are to be run on the node/VM/CT. Copy & paste.
For datacenter (Some links are broken but you can google for the version)
- https://developer.nvidia.com/datacenter-driver-archive
- https://docs.nvidia.com/datacenter/tesla/index.html
For linux/unix
- https://www.nvidia.com/en-us/drivers/unix/linux-amd64-display-archive/
- https://www.nvidia.com/en-us/drivers/unix/
<TAB> here means pressing the TAB key to auto complete the file name.
Inside a CT (the kernel modules come from the node)

```
wget LINKFROMABOVEHERE
chmod +x NVIDIA*.run
./NVIDIA<TAB> --no-kernel-modules
```

Inside a VM

```
wget LINKFROMABOVEHERE
apt install -y linux-headers-generic gcc make dkms
chmod +x NVIDIA*.run
./NVIDIA<TAB> --dkms
```

On the node

```
wget LINKFROMABOVEHERE
apt install -y pve-headers gcc make dkms
chmod +x NVIDIA*.run
./NVIDIA<TAB> --dkms
```

These commands are to be run inside a CT or on the node. Copy & paste.
Install this on the node if you want to give a NVIDIA GPU to a CT and install it in the CT if you want to give a passed through GPU to a docker container.
Adapted from the official guide.
```
apt update && apt install -y gpg curl --no-install-recommends
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | gpg --dearmor > /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
echo 'deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://nvidia.github.io/libnvidia-container/stable/deb/$(ARCH) /' | tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
apt update && apt install -y nvidia-container-toolkit
systemctl status docker.service >/dev/null 2>&1 && nvidia-ctk runtime configure --runtime=docker
# This is needed for LXC or you might get an error like
# nvidia-container-cli: mount error: failed to add device rules: unable to find any existing
# device filters attached to the cgroup: bpf_prog_query(BPF_CGROUP_DEVICE) failed: operation
# not permitted: unknown
if [[ $(systemd-detect-virt) == "lxc" ]]; then
    nvidia-ctk config -i --set nvidia-container-cli.no-cgroups=true
fi
systemctl status docker.service >/dev/null 2>&1 && systemctl restart docker.service
```

If you installed this to run docker containers you can verify it worked like this
```
docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
```

This sorts by compression ratio
```
zfs list -ospace,logicalused,compression,compressratio -rS compressratio
```

This sorts by used size
```
zfs list -ospace,logicalused,compression,compressratio -rS used
```

If the above shows USEDSNAP being very high and you already deleted snapshots or have none, it might be from an old/broken migration task.
It might make sense to add a | less at the end if you have lots of snapshots.
```
zfs list -ospace,logicalused,compression,compressratio,creation -rs creation -t snap
```

Since CTs use datasets this is trivial and should be reasonably safe, but make sure to take backups.
First grab some information about the CT (ID 120 in this example) you want to modify
```
# zfs list -ospace,logicalused,refquota | grep -E "NAME|120"
NAME                       AVAIL  USED   USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD  LUSED  REFQUOTA
nvmezfs/subvol-120-disk-0  7.02G  23.0G  160K      23.0G   0B             0B         28.3G  30G
```

Take note of USED and then simply set the refquota to what you want. Don't set the quota too low or lower than USED.
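If you want to pull the USED figure out programmatically (say, for a safety check before lowering the quota), awk can do it. A hedged sketch over a made-up sample matching the listing above:

```shell
# Made-up sample; on a node pipe the real `zfs list -ospace,logicalused,refquota`
zfs_sample() {
cat <<'EOF'
NAME                       AVAIL  USED   USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD  LUSED  REFQUOTA
nvmezfs/subvol-120-disk-0  7.02G  23.0G  160K      23.0G   0B             0B         28.3G  30G
EOF
}

# USED is the 3rd column
zfs_sample | awk '/subvol-120/ {print $3}'
```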
```
zfs set refquota=29G nvmezfs/subvol-120-disk-0
```

Lastly run a pct rescan
```
# pct rescan
rescan volumes...
CT 120: updated volume size of 'nvmezfs:subvol-120-disk-0' in config.
```

This works for growing it too, but the GUI already provides that option.
Adapted from the official documentation
PVE uses 10% of the host's memory by default but it only configures the system like that if the OS was installed on ZFS.
If you configure a ZFS storage after installation, the ZFS default of 50% will be used, which you probably don't want.
This will change soon: https://bugzilla.proxmox.com/show_bug.cgi?id=6285.
PVE 9 / ZFS 2.3.x removes the 50% limit on linux.
Check the current ARC size with
```
arc_summary -s arc
# Also helpful
arcstat
# To check hit ratios
arc_summary -s archits
```

Check the config file (which might not exist) with
```
cat /etc/modprobe.d/zfs.conf
```

The code below will try not to replace your file but only update it.
To calculate a percentage of your total memory in G you can use this
```
PERCENTAGE=10
grep MemTotal /proc/meminfo | awk -v percentage=$PERCENTAGE '{print int(($2 / 1024^2) / 100 * percentage)}'
```

Set the size in G you want to adapt to with this
```
ARC_SIZE_G=32
```

Then let the code below do the rest
```
MEMTOTAL_BYTES="$(($(awk '/MemTotal/ {print $2}' /proc/meminfo) * 1024))"
ARC_SIZE_BYTES_MIN="$(( MEMTOTAL_BYTES / 32 ))"
ARC_SIZE_BYTES_MAX=$(( ARC_SIZE_G * 1024*1024*1024 ))
# No "exit" here on purpose; this is pasted into an interactive shell
if [ "$ARC_SIZE_BYTES_MAX" -lt "$ARC_SIZE_BYTES_MIN" ]; then
    echo "Error: Given ARC Size of ${ARC_SIZE_BYTES_MAX} is lower than the current default minimum of ${ARC_SIZE_BYTES_MIN}. Please increase it."
elif [ "$ARC_SIZE_BYTES_MAX" -gt "$MEMTOTAL_BYTES" ]; then
    echo "Error: Given ARC Size of ${ARC_SIZE_BYTES_MAX} is greater than the total memory of ${MEMTOTAL_BYTES}. Please decrease it."
else
    echo "$ARC_SIZE_BYTES_MAX" > /sys/module/zfs/parameters/zfs_arc_max
    if grep -q "options zfs zfs_arc_max" "/etc/modprobe.d/zfs.conf" 2> /dev/null; then
        sed -ri "s/.*options zfs zfs_arc_max.*/options zfs zfs_arc_max=$ARC_SIZE_BYTES_MAX # ${ARC_SIZE_G}G/gm" /etc/modprobe.d/zfs.conf
    else
        echo "options zfs zfs_arc_max=$ARC_SIZE_BYTES_MAX # ${ARC_SIZE_G}G" >> /etc/modprobe.d/zfs.conf
    fi
fi
```

Check the config and ARC again to see if everything looks alright, then finally update the initramfs. This is needed so the settings are persisted.
```
# -k all might not be needed and omitting it speeds up the process
update-initramfs -u -k all
```

There is no reboot necessary.
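As a sanity check of the arithmetic above (safe to run anywhere; on a ZFS node the live value can be read back from /sys/module/zfs/parameters/zfs_arc_max):

```shell
# GiB -> bytes conversion used for zfs_arc_max
ARC_SIZE_G=32
echo $(( ARC_SIZE_G * 1024 * 1024 * 1024 ))
```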
Just some miscellaneous small tips and scripts which don't have a good place yet or are better to be linked from above to keep things structured and organized.
It goes without saying that you should be careful here. I trust you have backups.
First rescan
```
qm rescan
pct rescan
```

Now find unused disks in the configs
```
# grep -sRE "^unused[0-9]+: " /etc/pve/
/etc/pve/nodes/pve/qemu-server/500.conf:unused0: nvmezfs:vm-500-disk-1
```

Investigate their source
```
# pvesm path nvmezfs:vm-500-disk-1
/dev/zvol/nvmezfs/vm-500-disk-1
```

Show all of their paths
```
grep -sRE "^unused[0-9]+: " /etc/pve/ | awk -F': ' '{print $2}' | xargs -I{} pvesm path {}
```

Then delete if needed
```
# qm set 500 --delete unused0
```

Here's a little script to do all of this for you. It only prints the commands, it does not run them.
```
find /etc/pve/ -name '[0-9]*.conf' | while read -r config; do
    [[ "$config" == *"/lxc/"* ]] && CMD="pct" || CMD="qm"
    guest=$(basename "$config" .conf)
    unused_lines=$(grep -E '^unused[0-9]+: ' "$config") || continue
    echo "$unused_lines" | while read -r line; do
        echo "# $line"
        disk=$(echo "$line" | awk -F':' '{print $1}')
        echo -e "$CMD set $guest --delete $disk\n"
    done
done
```

A script that can extract the .conf files out of pmxcfs's config.db.
Only lightly tested and written without a lot of checks so be careful. Make a backup of the file and install sqlite3 with apt install sqlite3.
```
#!/usr/bin/env bash
# Attempts to restore .conf files from a PMXCFS config.db file.
set -euo pipefail

# Usually at /var/lib/pve-cluster/config.db
# You can do "cd /var/lib/pve-cluster/" and leave CONFIG_FILE as is
CONFIG_FILE="config.db"
# Using these paths can be convenient but dangerous!
# /etc/pve/nodes/$(hostname)/qemu-server/
VM_RESTORE_PATH="vms"
# /etc/pve/nodes/$(hostname)/lxc/
CT_RESTORE_PATH="cts"

[ -d "$VM_RESTORE_PATH" ] || mkdir "$VM_RESTORE_PATH"
[ -d "$CT_RESTORE_PATH" ] || mkdir "$CT_RESTORE_PATH"

GUESTIDS=$(sqlite3 $CONFIG_FILE "select name from tree where name like '%.conf' and name != 'corosync.conf';")
for guest in $GUESTIDS; do
    sqlite3 $CONFIG_FILE "select data from tree where name like '$guest';" >"$guest"
    if grep -q "rootfs" "$guest"; then
        mv "$guest" "$CT_RESTORE_PATH"
        echo "Restored CT config $guest to $CT_RESTORE_PATH/$guest"
    else
        mv "$guest" "$VM_RESTORE_PATH"
        echo "Restored VM config $guest to $VM_RESTORE_PATH/$guest"
    fi
done
```

You can monitor all your disks' SMART info like this. This creates a nice "table" and highlights changes.
Temperature

```
watch -x -c -d -n1 bash -c 'for i in /dev/{nvme[0-9]n1,sd[a-z]}; do echo -e "\n[$i]"; smartctl -a $i | grep -Ei "Device Model|Model Number|Serial|temperature"; done'
```

Errors

```
watch -x -c -d -n1 bash -c 'for i in /dev/{nvme[0-9]n1,sd[a-z]}; do echo -e "\n[$i]"; smartctl -a $i | grep -Ei "Device Model|Model Number|Serial|error"; done'
```

Temperature and writes

```
watch -x -c -d -n1 bash -c 'for i in /dev/{nvme[0-9]n1,sd[a-z]}; do echo -e "\n[$i]"; smartctl -a $i | grep -Ei "Device Model|Model Number|Serial|temperature|writ"; done'
```

and so on.
PVE keeps credentials like CIFS passwords in /etc/pve/priv/storage.
```
apt install smem --no-install-suggests --no-install-recommends
# -a, --autosize       size columns to fit terminal size
# -t, --totals         show totals
# -k, --abbreviate     show unit suffixes
# -r, --reverse        reverse sort
# -s SORT, --sort=SORT field to sort on
watch -n1 'smem -atkr -s swap'
```

If you have multiple GPUs you will likely have multiple /dev/dri/card* and /dev/dri/renderD* devices.
Note the values before and after the ->. In this example 01:00.0, 05:00.0 and 09:00.0
# ls -l /sys/class/drm/*/device
lrwxrwxrwx 1 root root 0 May 17 07:54 /sys/class/drm/card0/device -> ../../../0000:05:00.0
lrwxrwxrwx 1 root root 0 May 17 07:54 /sys/class/drm/card1/device -> ../../../0000:09:00.0
lrwxrwxrwx 1 root root 0 May 17 07:54 /sys/class/drm/card2/device -> ../../../0000:01:00.0
lrwxrwxrwx 1 root root 0 May 17 07:54 /sys/class/drm/renderD128/device -> ../../../0000:09:00.0
lrwxrwxrwx 1 root root 0 May 17 07:54 /sys/class/drm/renderD129/device -> ../../../0000:01:00.0
You can then cross-reference them with the first column of lspci | grep -i "VGA"
# lspci | grep -i "VGA"
01:00.0 VGA compatible controller: NVIDIA Corporation GA102 [GeForce RTX 3090] (rev a1)
05:00.0 VGA compatible controller: ASPEED Technology, Inc. ASPEED Graphics Family (rev 41)
09:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Cezanne [Radeon Vega Series / Radeon Vega Mobile Series] (rev c8)
Device paths such as /dev/dri/renderD128 and /dev/dri/card0 can change their name across boots, similar to /dev/sdX for disks.
We can use udev rules to create a symlink that will refer to the right device. Also see the Arch Wiki article about UDEV.
Check which PCIe device a DRM device belongs to first.
Also see Check device and drivers to get the vendor and device ids.
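Those ids can also be read straight from sysfs in one loop. A sketch (prints nothing on machines without render nodes):

```shell
# Print each render node with its PCI address and vendor/device ids.
for d in /sys/class/drm/renderD[0-9]*; do
  [ -e "$d/device" ] || continue
  printf '%s -> %s vendor=%s device=%s\n' "${d##*/}" \
    "$(basename "$(readlink -f "$d/device")")" \
    "$(cat "$d/device/vendor")" "$(cat "$d/device/device")"
done
```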
Create a file in /etc/udev/rules.d/ via nano /etc/udev/rules.d/99-gpu-render.rules and put this in it
# Render
# Render
SUBSYSTEM=="drm", ENV{DEVNAME}=="/dev/dri/renderD[0-9]*", KERNEL=="renderD[0-9]*", \
SYMLINK+="dri/render-$attr{vendor}_$attr{device}-$attr{subsystem_vendor}_$attr{subsystem_device}-$driver_$env{ID_PATH_TAG}"
SUBSYSTEM=="drm", ENV{DEVNAME}=="/dev/dri/renderD[0-9]*", KERNEL=="renderD[0-9]*", \
SYMLINK+="dri/render-$attr{vendor}_$attr{device}-$attr{subsystem_vendor}_$attr{subsystem_device}-$driver"
SUBSYSTEM=="drm", ENV{DEVNAME}=="/dev/dri/renderD[0-9]*", KERNEL=="renderD[0-9]*", \
SYMLINK+="dri/render-$driver_$env{ID_PATH_TAG}"
# Video/Card
SUBSYSTEM=="drm", ENV{DEVNAME}=="/dev/dri/card[0-9]*", KERNEL=="card[0-9]*", \
SYMLINK+="dri/card-$attr{vendor}_$attr{device}-$attr{subsystem_vendor}_$attr{subsystem_device}-$driver_$env{ID_PATH_TAG}", GROUP="video"
SUBSYSTEM=="drm", ENV{DEVNAME}=="/dev/dri/card[0-9]*", KERNEL=="card[0-9]*", \
SYMLINK+="dri/card-$attr{vendor}_$attr{device}-$attr{subsystem_vendor}_$attr{subsystem_device}-$driver", GROUP="video"
SUBSYSTEM=="drm", ENV{DEVNAME}=="/dev/dri/card[0-9]*", KERNEL=="card[0-9]*", \
SYMLINK+="dri/card-$driver_$env{ID_PATH_TAG}", GROUP="video"
Then reload and trigger udev
udevadm control --reload-rules && udevadm trigger --subsystem-match=drm
Now check ls -l /dev/dri/. This rules file should have dynamically created links like this in /dev/dri/
render-nvidia_pci-0000_01_00_0
render-0x10de_0x2204-0x1043_0x87b3-nvidia
render-0x10de_0x2204-0x1043_0x87b3-nvidia_pci-0000_01_00_0
card-nvidia_pci-0000_01_00_0
card-0x10de_0x2204-0x1043_0x87b3-nvidia_pci
card-0x10de_0x2204-0x1043_0x87b3-nvidia_pci-0000_01_00_0
This allows you to easily and reliably refer to a specific GPU's device to pass to a CT.
Note that these links change if the PCI ID does too. The ID is needed to uniquely refer to a device so it's part of the link name.
For NVIDIA I'd generally recommend the NVIDIA Container Toolkit, which makes this mostly unnecessary, but if you know of a simple way to achieve this for NVIDIA's card0 devices let me know.
This works for other things such as USB devices too. In this example I will work with these two GPUs. Take note of the first column.
# lspci -nnk | grep -i "VGA"
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GA102 [GeForce RTX 3090] [10de:2204] (rev a1)
09:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Cezanne [Radeon Vega Series / Radeon Vega Mobile Series] [1002:1638] (rev c8)
I have these devices
# ls -l /sys/class/drm/*/device
lrwxrwxrwx 1 root root 0 Sep 25 22:16 /sys/class/drm/card0/device -> ../../../0000:05:00.0
lrwxrwxrwx 1 root root 0 Sep 25 22:16 /sys/class/drm/card0-VGA-1/device -> ../../card0
lrwxrwxrwx 1 root root 0 Sep 25 22:16 /sys/class/drm/card1/device -> ../../../0000:01:00.0
lrwxrwxrwx 1 root root 0 Sep 25 22:16 /sys/class/drm/card2/device -> ../../../0000:09:00.0
lrwxrwxrwx 1 root root 0 Sep 25 22:16 /sys/class/drm/renderD128/device -> ../../../0000:01:00.0
lrwxrwxrwx 1 root root 0 Sep 25 22:16 /sys/class/drm/renderD129/device -> ../../../0000:09:00.0
As you can see renderD129 points to my iGPU (09:00.0) and it's what I use in this example.
Check for unique attributes to target the device
# udevadm info --attribute-walk --name=/dev/dri/renderD129 | grep -E "SUBSYSTEM|KERNEL|{device}|{vendor}"
KERNEL=="renderD129"
SUBSYSTEM=="drm"
KERNELS=="0000:09:00.0"
SUBSYSTEMS=="pci"
ATTRS{device}=="0x1638"
ATTRS{vendor}=="0x1002"
KERNELS=="0000:00:08.1"
SUBSYSTEMS=="pci"
ATTRS{device}=="0x1635"
ATTRS{vendor}=="0x1022"
KERNELS=="pci0000:00"
SUBSYSTEMS==""
Here KERNELS=="0000:09:00.0", ATTRS{device}=="0x1638" and ATTRS{vendor}=="0x1002" match my iGPU, so I'll use those. The ATTRS{device}=="0x1635"/ATTRS{vendor}=="0x1022" pair further down belongs to the parent PCI bridge, not the GPU itself.
Create a file in /etc/udev/rules.d/ via nano /etc/udev/rules.d/99-gpu-render.rules.
Mine looks like this for both GPUs' devices
# iGPU
SUBSYSTEM=="drm", ENV{DEVNAME}=="/dev/dri/renderD[0-9]*", KERNEL=="renderD[0-9]*", ATTRS{vendor}=="0x1002", ATTRS{device}=="0x1638", \
SYMLINK+="dri/render-igpu", GROUP="render"
SUBSYSTEM=="drm", ENV{DEVNAME}=="/dev/dri/card[0-9]*", KERNEL=="card[0-9]*", ATTRS{vendor}=="0x1002", ATTRS{device}=="0x1638", \
SYMLINK+="dri/card-igpu", GROUP="video"
# dGPU
SUBSYSTEM=="drm", ENV{DEVNAME}=="/dev/dri/renderD[0-9]*", KERNEL=="renderD[0-9]*", ATTRS{vendor}=="0x10de", ATTRS{device}=="0x2204", \
SYMLINK+="dri/render-dgpu", GROUP="render"
SUBSYSTEM=="drm", ENV{DEVNAME}=="/dev/dri/card[0-9]*", KERNEL=="card[0-9]*", ATTRS{vendor}=="0x10de", ATTRS{device}=="0x2204", \
SYMLINK+="dri/card-dgpu", GROUP="video"
Feel free to use more fitting names.
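Once the symlinks exist you can hand one to a container via PVE's device passthrough entries. An illustrative snippet (the CT id and gid are placeholders; on Debian the render group is often gid 104, check getent group render):

```
# /etc/pve/lxc/<ctid>.conf
dev0: /dev/dri/render-igpu,gid=104
```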
Finally reload and trigger udev
udevadm control --reload-rules && udevadm trigger --subsystem-match=drm
See if the symlinks appear via ls -l /dev/dri/ and point to the right devices.
This works the same way for other such devices.
This is useful if you want to know which controller a disk is connected to.
Note the values before and after the ->. In this example 02:00.1 and 08:00.0
# ls -l /dev/disk/by-path/
lrwxrwxrwx 1 root root 9 Jul 1 18:05 pci-0000:02:00.1-ata-2 -> ../../sda
lrwxrwxrwx 1 root root 13 Jul 1 18:05 pci-0000:08:00.0-nvme-1 -> ../../nvme0n1
You can then cross-reference them with the first column of lspci
# lspci
02:00.1 SATA controller: Advanced Micro Devices, Inc. [AMD] 500 Series Chipset SATA Controller
00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Renoir PCIe Dummy Host Bridge
Note that you can't necessarily rely on the name to always refer to the same device.
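The whole mapping can be printed in one go with a small loop over the by-path links (a sketch; /dev/disk/by-path may be empty or absent inside containers):

```shell
# Resolve each by-path link to the disk node it points at.
for l in /dev/disk/by-path/pci-*; do
  [ -e "$l" ] || continue
  printf '%s -> %s\n' "${l##*/}" "$(readlink -f "$l")"
done
```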
This section is about how to check what process and disk causes wait (IO Delay), how fast it reads/writes and so on.
Also see these articles:
- https://www.site24x7.com/learn/linux/troubleshoot-high-io-wait.html
- https://linuxblog.io/what-is-iowait-and-linux-performance/
- https://serverfault.com/questions/367431/what-creates-cpu-i-o-wait-but-no-disk-operations
Install the dependencies first.
apt install -y sysstat iotop-c fatrace
IO delay or IO wait is shown in the PVE Summary, and good ol' top can also be used to check it via the wa value in its %Cpu(s) line.
iotop-c can show per-process statistics. For it to properly work (see why below) you should add the delayacct kernel arg and reboot.
Alternatively, use
sysctl -w kernel.task_delayacct=1
to switch the state at runtime (and sysctl kernel.task_delayacct to read it back).
Note however that only tasks started after enabling it will have delayacct information.
https://docs.kernel.org/accounting/delay-accounting.html#usage
Run this and check the column (select it via arrow keys) you're interested in.
# -c, --fullcmdline show full command line
# -P, --processes only show processes, not all threads
# -a, --accumulated show accumulated I/O instead of bandwidth
iotop-c -cP
Also try iotop-c -cPa or press a to toggle cumulative mode and let it run for a while.
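Under the hood these per-process counters come from /proc. A minimal sketch for spot-checking a single process without any extra tools (here the current shell, $$):

```shell
# Raw per-process I/O counters. read_bytes/write_bytes are what actually
# hit the block layer; rchar/wchar count all read()/write() traffic,
# including data served from the page cache.
cat /proc/$$/io
```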

iostat can show per device statistics. Run this and check the %util for the disk(s).
# -x Display extended statistics.
# -y Omit first report with statistics since system boot.
# -z Omit output for devices for which there was no activity during the sample period
# -t Print the time for each report displayed.
# -s Display a short (narrow) version of the report up to 80 characters.
# --compact Don't break the Device Utilization Report into sub-reports.
# --human Print sizes in human readable format (e.g. 1.0k, 1.2M, etc.).
iostat -xyzts --compact --human 1
fatrace can be used to check file events such as read, write, create and so on. It can help you identify which processes are modifying files and when. Here's an example to listen for file writes
# -f TYPES, --filter=TYPES Show only the given event types; C, R, O, or W, e. g. --filter=OC
fatrace -f W
# -y Normally the first line of output reports the statistics since boot: suppress it.
# -l Include average latency statistics.
watch -cd -n1 "zpool iostat -yl 1 1"
# -q Include active queue statistics.
watch -cd -n1 "zpool iostat -yq 1 1"
# -r Print request size histograms for the leaf vdev's I/O
watch -cd -n1 "zpool iostat -yr 1 1"
With PVE 9 / Debian 13 the file suffix can now also be .sources so don't get confused by that.
Also see official docs:
- https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysadmin_no_subscription_repo
- https://pve.proxmox.com/pve-docs/pve-admin-guide.html#repos_secure_apt
Go to node > Updates > Repositories and add the no-subscription repo.

Disable the enterprise repos
At the end it should look like this.

Go to node > Updates > Refresh and see if everything works as expected.
Here's an example /etc/apt/sources.list file
deb http://ftp.debian.org/debian bookworm main contrib
deb http://ftp.debian.org/debian bookworm-updates main contrib
# security updates
deb http://security.debian.org/debian-security bookworm-security main contrib
# Proxmox VE pve-no-subscription repository provided by proxmox.com,
# NOT recommended for production use
deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription
To keep the default one and add just the proxmox repo in its own file you can do this
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list
You should disable the default enterprise repos at this point by commenting out the lines
sed -i '/^#/!s/^/#/' /etc/apt/sources.list.d/pve-enterprise.list
Now check with apt update for errors.
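As a quick sanity check before running apt update, you can grep every .list file for enterprise entries that are still active (a sketch; commented-out lines won't match):

```shell
# Any matching output here means an enterprise repo is still active.
grep -rn --include='*.list' '^deb .*enterprise\.proxmox\.com' /etc/apt/ 2>/dev/null \
  || echo "no active enterprise repos"
```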
You can find an example /etc/apt/sources.list.d/proxmox.sources file here.
It looks like this
Types: deb
URIs: http://download.proxmox.com/debian/pve
Suites: trixie
Components: pve-no-subscription
Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg
To create this file you can use this command
cat > /etc/apt/sources.list.d/proxmox.sources << EOF
Types: deb
URIs: http://download.proxmox.com/debian/pve
Suites: trixie
Components: pve-no-subscription
Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg
EOF
You might have to download the key it references via
wget https://enterprise.proxmox.com/debian/proxmox-archive-keyring-trixie.gpg -O /usr/share/keyrings/proxmox-archive-keyring.gpg
You should disable the default enterprise repos at this point by commenting out the lines (or appending Enabled: no)
sed -i '/^#/!s/^/#/' /etc/apt/sources.list.d/pve-enterprise.sources
Now check with apt update for errors.
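Instead of commenting every line, deb822-style .sources stanzas can be disabled with an Enabled: no field. A sketch on a scratch copy (the real file is /etc/apt/sources.list.d/pve-enterprise.sources):

```shell
# Build a scratch copy of the stanza, then append "Enabled: no";
# apt skips such a stanza entirely.
cat > /tmp/pve-enterprise.sources << 'EOF'
Types: deb
URIs: https://enterprise.proxmox.com/debian/pve
Suites: trixie
Components: pve-enterprise
Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg
EOF
printf 'Enabled: no\n' >> /tmp/pve-enterprise.sources
grep 'Enabled' /tmp/pve-enterprise.sources
```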
Do you have strange characters in your CLI tools rather than unicode symbols? The default C locale might be the cause.
This is mostly useful for CTs. For VMs you generally set this up during install.
To interactively change it you can use
dpkg-reconfigure locales
To non-interactively change it you can use something like this
echo "en_US.UTF-8 UTF-8" > /etc/locale.gen
echo 'LANG=en_US.UTF-8' > /etc/locale.conf
ln -sf /etc/locale.conf /etc/default/locale
source /etc/locale.conf
locale-gen
Verify with these
locale
localectlPVE is able to send you notifications about updates which look something like this
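For a non-interactive check that the locale was actually generated, a small sketch (locale -a usually prints the name as en_US.utf8):

```shell
# Prints the locale name if it's available, otherwise a hint.
locale -a | grep -ix 'en_us.utf8' || echo "en_US.UTF-8 not generated yet"
```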
The following updates are available:
Package Name Installed Version Available Version
libxslt1.1 1.1.35-1.2+deb13u1 1.1.35-1.2+deb13u2
xsltproc 1.1.35-1.2+deb13u1 1.1.35-1.2+deb13u2
To enable them run this
pvesh set /cluster/options --notify package-updates=always
I also like to install apticron which gives a lot more details
apt install apticron
File based disks (stored on Directory type storages) such as .qcow2, .raw and so on can have some issues.
PVE does not enable the Content Types of the local storage to store such files by default.
- They can be slow and inefficient.
- CTs only support .raw files, which provide no snapshot ability.
- Thin provisioning doesn't necessarily work.
- They use the same storage as the OS/system.
- No replication possible.