@surya00060
Last active August 25, 2025 20:59
SLURM job submission on Gilbreth Cluster
#!/bin/bash -l
#SBATCH --nodes=1                 # Number of nodes
#SBATCH --ntasks-per-node=24      # CPUs per node; each node has 96 CPUs (24 per GPU)
#SBATCH --gres=gpu:1              # Number of GPUs; each node has 4 H100s
#SBATCH --partition=araghu        # Always set to araghu
#SBATCH --mem=240G                # Memory required; maximum is 2 TB per node
#SBATCH --time=2-00:00:00         # Maximum wall time (days-hh:mm:ss)
#SBATCH -A araghu                 # Account: araghu or araghu-scale, whichever you have access to
#SBATCH -J job_name               # Job name
#SBATCH --output=job_output.out   # Job output file
echo "=== Job $SLURM_JOB_ID starting at $(date +'%Y-%m-%d %H:%M:%S') ==="
module load conda
conda activate /depot/araghu/data/selvams/CondaEnvs/Colbert-RAG
python xx_yy.py
echo "=== Job $SLURM_JOB_ID finished at $(date +'%Y-%m-%d %H:%M:%S') ==="
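The script above is submitted with the standard SLURM commands. The sketch below also shows an illustrative helper for keeping the CPU request proportional to the GPU request, assuming the 96-CPU / 4-GPU node layout noted in the script's comments; the variable names (`gpus`, `cpus_per_gpu`, `ntasks`) and the filename `job.sh` are placeholders, not part of the script itself:

```shell
# Submit the script and monitor it (run from a login node):
#   sbatch job.sh          # prints "Submitted batch job <jobid>"
#   squeue -u $USER        # list your pending/running jobs
#   scancel <jobid>        # cancel a job if needed

# Illustrative helper: scale --ntasks-per-node with the number of GPUs
# requested, assuming 96 CPUs and 4 GPUs per node as stated above.
gpus=2                          # e.g. requesting 2 of the 4 H100s
cpus_per_gpu=$((96 / 4))        # 24 CPUs per GPU on this layout
ntasks=$((gpus * cpus_per_gpu))
echo "--ntasks-per-node=$ntasks --gres=gpu:$gpus"
# → --ntasks-per-node=48 --gres=gpu:2
```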