Scheduler Cheatsheet for AWS ParallelCluster

General

|                        | sge                 | slurm             | torque        |
|------------------------|---------------------|-------------------|---------------|
| Submit Interactive Job | qlogin              | srun              | qsub -I       |
| Submit Batch Job       | qsub                | sbatch            | qsub          |
| Number of Slots        | -pe mpi [n]         | -n [n]            | -l ppn=[n]    |
| Number of Nodes        | -pe mpi [slots * n] | -N [n]            | -l nodes=[n]  |
| Cancel Job             | qdel                | scancel           | qdel          |
| See Queue              | qstat               | squeue            | qstat         |
| See Nodes              | qhost               | sinfo -N          | pbsnodes      |
| Script Directive       | #$                  | #SBATCH           | #PBS          |
| Job Name               | -N [name]           | --job-name [name] | -N [name]     |
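
For example, the same 8-task MPI batch job could be submitted to each scheduler as follows (job.sh is a placeholder script; the SGE parallel environment name mpi follows the table above):

# Slurm
$ sbatch -n 8 job.sh

# SGE
$ qsub -pe mpi 8 job.sh

# Torque
$ qsub -l nodes=1:ppn=8 job.sh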

Slurm

Submit jobs with:

$ sbatch
$ srun
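
For example, assuming a batch script named job.sh and an arbitrary task count of 4:

$ sbatch -n 4 job.sh     # queue a 4-task batch job
$ srun --pty bash        # start an interactive shell on a compute node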

See the queue with:

$ squeue
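
To narrow the listing, squeue can filter by user or by job ID, for example:

$ squeue -u $USER
$ squeue -j [jobid]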

Cancel jobs with:

$ scancel [jobid]
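
scancel also accepts filters; for example, to cancel all of your own jobs:

$ scancel -u $USER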

See node details with:

$ sinfo -Nl
Fri May  8 16:49:47 2020
NODELIST        NODES PARTITION       STATE CPUS    S:C:T MEMORY TMP_DISK WEIGHT AVAIL_FE REASON              
ip-10-0-10-70       1  compute*        idle   72   72:1:1      1        0      1   (null) none                
ip-10-0-10-133      1  compute*        idle   72   72:1:1      1        0      1   (null) none    
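
Plain sinfo prints a partition-level summary instead; you can also filter by node state, for example to show only idle nodes:

$ sinfo -t idle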

Example job submission script:

#!/bin/bash
#SBATCH --job-name=test
#SBATCH --ntasks=8
#SBATCH --output=%x_%j.out
module load openmpi
mpirun job
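
Assuming the script above is saved as job.sh (any name works), submit it with:

$ sbatch job.sh

With --output=%x_%j.out, the job's output is written to test_<jobid>.out, since %x expands to the job name and %j to the job ID.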

Run Job on All Nodes

SGE

NODES=30
CORES=36
qsub -t 1-$NODES:1 -pe mpi $((NODES * CORES)) job.sh

SLURM

NODES=30
srun --ntasks-per-node 1 --ntasks $NODES job.sh

TORQUE

NODES=30
qsub -l nodes=$NODES:ppn=1 job.sh
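
A minimal job.sh for the examples above might just report which node each task ran on (hostname is only an illustration; replace it with your real workload):

#!/bin/bash
echo "Running on $(hostname)"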