Setup: 2x NVIDIA A100 GPUs with 40 GB memory each; both support MIG (Multi-Instance GPU).
Commands:

nvidia-smi -L
  Lists the available GPUs.

cat .config/docker/daemon.json
  Docker is configured to use the NVIDIA container runtime, meaning Docker can access GPU slices through the nvidia runtime.

kubectl get pods -n nvidia-dra-driver
  The DRA driver pods take care of the allocation of GPUs.
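For reference, a daemon.json that registers the NVIDIA runtime typically looks something like the sketch below (the binary path can differ per install; check your own file rather than copying this):

```json
{
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
```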
nvidia-smi -i 0 -mig 1
  Enables MIG mode on GPU 0.

nvidia-smi -i 0 -q
  Queries the full status of GPU 0.

sudo nvidia-smi mig -lgip
  Lists all the available Multi-Instance GPU (MIG) profiles; decide which ones to use based on the resource claims of your pods.
sudo nvidia-smi mig -cgi 19,19,20,5,0 -C
  Creates the GPU instances (and, with -C, the corresponding compute instances); 19,19,20,5,0 are profile IDs from the -lgip listing.
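As a sanity check before creating instances, the compute-slice budget can be worked out from the profile IDs. The ID-to-profile mapping below is an assumption based on a typical `nvidia-smi mig -lgip` listing for an A100 40GB (verify against your own listing); each A100 exposes 7 compute slices:

```python
# Assumed profile-ID -> (name, compute slices) mapping for an A100 40GB,
# as shown by a typical `nvidia-smi mig -lgip` -- verify locally.
PROFILES = {
    0:  ("MIG 7g.40gb", 7),
    5:  ("MIG 4g.20gb", 4),
    9:  ("MIG 3g.20gb", 3),
    14: ("MIG 2g.10gb", 2),
    19: ("MIG 1g.5gb", 1),
    20: ("MIG 1g.5gb+me", 1),
}

SLICES_PER_GPU = 7

def slices_needed(profile_ids):
    """Total compute slices consumed by the requested GPU instances."""
    return sum(PROFILES[p][1] for p in profile_ids)

# The request above: two 1g.5gb, one 1g.5gb+me, one 4g.20gb, one 7g.40gb.
# With two MIG-enabled A100s (7 slices each) this fits exactly.
requested = [19, 19, 20, 5, 0]
print(slices_needed(requested), "slices requested,",
      2 * SLICES_PER_GPU, "slices available")
```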
nvidia-smi
  Lists the MIG devices and their sizes.
nvidia-smi -L
  Lists the GPUs together with their MIG devices.

sudo docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
  Verifies that containers can see the GPUs/MIG devices.

sudo nvidia-smi mig -lgi
  Lists the created GPU instances.

Main repo: https://github.com/NVIDIA/k8s-dra-driver
Sample spec showing how to define resource claims: https://github.com/NVIDIA/k8s-dra-driver/blob/main/demo/specs/quickstart/gpu-test5.yaml
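For orientation, a resource claim under the Kubernetes DRA API has roughly the shape sketched below. This is an assumption only: the API version (`resource.k8s.io/v1alpha2`), the class name `gpu.nvidia.com`, and all object names are placeholders that may not match the driver version in use; the linked gpu-test5.yaml is the authoritative example.

```yaml
# Sketch of a DRA resource claim and a pod consuming it -- names and
# API versions are assumptions; see the quickstart specs in the repo.
apiVersion: resource.k8s.io/v1alpha2
kind: ResourceClaimTemplate
metadata:
  name: single-gpu              # hypothetical name
spec:
  spec:
    resourceClassName: gpu.nvidia.com   # assumed resource class name
---
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod                 # hypothetical name
spec:
  resourceClaims:
  - name: gpu
    source:
      resourceClaimTemplateName: single-gpu
  containers:
  - name: ctr
    image: ubuntu:22.04
    command: ["nvidia-smi", "-L"]
    resources:
      claims:
      - name: gpu               # container gets the claimed GPU slice
```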