NOTE: These instructions have migrated to the Kitzes Lab's lab docs
First, make sure you have SSH set up by logging into the XSEDE website and setting up two-factor authentication.
log in to bridges2.psc.edu
via SSH directly (default port 22) instead of through XSEDE
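For example (substitute your own PSC username for [USER]):
ssh [USER]@bridges2.psc.edu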
check cluster status: sinfo
show cluster configuration: sacctmgr show cluster
check a node: scontrol show node r007
Check our allocations: projects
find available software
module spider
module avail python
load python
module load anaconda3/2020.07
load additional packages
pip3 install --user virtualenvwrapper
pip3 install --user poetry
now run: which virtualenvwrapper_lazy.sh
to check that it is in ~/.local/bin/
then open your ~/.bashrc and add these lines:
export PATH=$PATH:~/.local/bin #or wherever virtualenvwrapper_lazy.sh is
source virtualenvwrapper_lazy.sh
export WORKON_HOME=~/.cache/pypoetry/virtualenvs
to deactivate an environment later:
deactivate
just this once, run source ~/.bashrc
since the new lines aren't sourced in your current shell yet
use virtualenvwrapper as normal. For instance, create an environment:
mkvirtualenv testenv
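A couple of other standard virtualenvwrapper commands that may be useful later (assuming the default virtualenvwrapper setup):
workon testenv    # re-activate an existing environment
lsvirtualenv      # list all environments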
navigate to your home directory, since that's where we want to keep opso. Use poetry to build dependencies in a new environment.
cd ~ #go to home
git clone https://github.com/kitzeslab/opensoundscape.git #get opso
cd opensoundscape #enter directory
#if necessary, check out a branch, eg 'git checkout develop'
poetry build # build the package's distribution archives
poetry install #create an environment with all dependencies and opensoundscape
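Poetry names the environment it creates with a hash (e.g. the opensoundscape-QPriMHYU-py3.7 used in the Slurm script below). Assuming a Poetry version with the env subcommand, you can list the environment name with:
poetry env list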
we will typically use two node types (these notes are from Bridges-1; use the Bridges-2 docs for current info):
- RSM with 128 GB RAM - analogous to SMP
- AI-GPU (V100s) - for GPU jobs (a sketch of a GPU request follows this list)
- RSM-GPU (the P100 cards) might also be good for us, 16 cores per GPU
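A minimal sketch of the GPU-related SBATCH lines, assuming standard Slurm syntax; check the exact partition and GPU type names against the Bridges-2 docs:
#SBATCH --partition=GPU     # GPU partition name (assumed; confirm in the Bridges-2 docs)
#SBATCH --gres=gpu:1        # request one GPU on the node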
Places to typically store files:
Home space: $HOME - for small stuff like scripts
$PROJECTS - bulk storage (10 TB)
Node-local ($LOCAL): disk that can be used from within a node during a job (temporary! cleared when the job ends); see the staging sketch below
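A minimal sketch of staging data through node-local disk inside a job script (the mydata and results names are hypothetical; substitute your own paths):
cp -r $PROJECTS/mydata $LOCAL/    # copy inputs onto the node's local disk
cd $LOCAL
# ... run the analysis here ...
cp -r $LOCAL/results $PROJECTS/   # copy outputs back before the job ends and $LOCAL is cleared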
Can use Globus to transfer files (follow the website instructions to add an endpoint, or ask Sam to do it)
make a slurm script, probably in your home directory. It might look like this:
#!/usr/bin/env bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=1
#SBATCH --partition=RM
#SBATCH --time=00:05:00
#SBATCH --output=log.out
module load anaconda3/2019.10
workon opensoundscape-QPriMHYU-py3.7
#do stuff
opensoundscape -h
don't run module purge. If you do, you'll need to get the sbatch command back with module load slurm
submit a slurm job
sbatch script.slurm
view current jobs
squeue -u [USER]
cancel job (get job number from squeue)
scancel [job#]
A replacement for JupyterHub / RStudio - they are setting this up currently (Jan 2021)