@shawwn
shawwn / example.sh
Created March 6, 2023 05:17
How I run 65B using my fork of llama at https://github.com/shawwn/llama
# Pick one; mp is the model-parallel degree (number of processes / checkpoint shards).
mp=1; size=7B;  # to run 7B
mp=8; size=65B; # to run 65B (the later assignment wins, so comment out the size you don't want)

for seed in $(randint 1000000)
do
    export TARGET_FOLDER=~/ml/data/llama/LLaMA
    time python3 -m torch.distributed.run --nproc_per_node $mp example.py \
        --ckpt_dir $TARGET_FOLDER/$size \
        --tokenizer_path $TARGET_FOLDER/tokenizer.model \
        --seed $seed --max_seq_len 2048 --max_gen_len 2048 --count 0 \
        | tee -a ${size}_startrek.txt
done
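
The loop assumes a randint helper on the PATH that prints a random integer; it is not a standard shell builtin and is not defined in this gist. A minimal sketch, assuming "randint N" should print one integer in [0, N), could be:

# Hypothetical stand-in for the randint helper used above: prints one
# random integer in [0, n), built from bash's $RANDOM (0..32767).
randint() {
    local n=$1
    echo $(( (RANDOM * 32768 + RANDOM) % n ))
}

Note that --nproc_per_node must match the shard count of the checkpoint being loaded: the official LLaMA release ships 1 shard for 7B, 2 for 13B, 4 for 30B, and 8 for 65B, which is why mp=8 is used for the 65B run.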