A simple working example of an Argo Workflow configured to run on GPU nodes.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: cuda-vector-add-
spec:
  entrypoint: main
  templates:
  - name: main
    # requires this pod to be scheduled on a node labeled nvidia.com/gpu=true
    nodeSelector:
      nvidia.com/gpu: "true"
    # allows this pod to be scheduled on nodes tainted for GPU workloads
    tolerations:
    - key: nvidia.com/gpu
      operator: Exists
      effect: NoSchedule
    - key: gpu
      value: "true"
      operator: Equal
      effect: NoSchedule
    container:
      # https://github.com/kubernetes/kubernetes/blob/v1.7.11/test/images/nvidia-cuda/Dockerfile
      image: "k8s.gcr.io/cuda-vector-add:v0.1"
      resources:
        # GPUs are specified only under limits (requests, if set, must equal limits):
        # https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/
        limits:
          nvidia.com/gpu: 1 # requesting 1 GPU
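
A typical way to try this, assuming the Argo CLI is installed and the manifest above is saved as cuda-vector-add.yaml (the filename is arbitrary), is to submit it and watch it run:

    argo submit --watch cuda-vector-add.yaml

On success, the cuda-vector-add container runs a small CUDA vector addition; its pod log should report that the test passed, which can be checked with argo logs on the completed workflow (for example, argo logs @latest on recent CLI versions).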