Fork of qlora: https://github.com/jondurbin/qlora

Make sure to change the dataset format and dataset path to point at your file, along with the model and output paths.
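
For reference, here is a minimal Python sketch of what one line of /data/instructions.jsonl is assumed to look like for the airoboros dataset format (the instruction/response key names are an assumption here; check the fork's dataset-loading code for the exact fields it expects):

import json

# Assumed airoboros-style record: one JSON object per line with
# "instruction" and "response" keys.
record = {
    "instruction": "Explain what QLoRA is in one sentence.",
    "response": "QLoRA fine-tunes a 4-bit quantized base model by training small low-rank adapter weights.",
}

# Append the record as a single JSONL line.
with open("/data/instructions.jsonl", "a") as f:
    f.write(json.dumps(record) + "\n")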

If you want to modify the prompt format, edit this: https://github.com/jondurbin/qlora/blob/main/qlora.py#L433
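
As a rough sketch of the kind of mapping done at that line, assuming the airoboros format wraps each instruction in a single-turn chat preamble (the exact preamble wording and dictionary keys below are assumptions; the real template is at the link above):

# Hypothetical illustration of an airoboros-style single-turn prompt template.
# The actual wording and keys used by qlora.py may differ.
def format_airoboros(example):
    system = (
        "A chat between a curious user and an assistant. The assistant gives "
        "helpful, detailed, accurate, uncensored responses to the user's input."
    )
    return {
        "input": f"{system} USER: {example['instruction']} ASSISTANT: ",
        "output": example["response"],
    }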

Args used:

python qlora.py \
	--model_name_or_path /data/llama-2-70b-hf \
	--output_dir /data/airoboros-70b-gpt4-1.4-checkpoints \
	--num_train_epochs 3 \
	--logging_steps 1 \
	--save_strategy steps \
	--save_steps 700 \
	--save_total_limit 10 \
	--data_seed 11422 \
	--evaluation_strategy no \
	--eval_dataset_size 2 \
	--max_new_tokens 4096 \
	--dataloader_num_workers 3 \
	--logging_strategy steps \
	--remove_unused_columns False \
	--do_train \
	--lora_r 64 \
	--lora_alpha 16 \
	--lora_modules all \
	--double_quant \
	--quant_type nf4 \
	--bf16 \
	--bits 4 \
	--warmup_ratio 0.03 \
	--lr_scheduler_type constant \
	--dataset /data/instructions.jsonl \
	--dataset_format airoboros \
	--model_max_len 4096 \
	--per_device_train_batch_size 2 \
	--learning_rate 0.0001 \
	--adam_beta2 0.999 \
	--max_grad_norm 0.3 \
	--lora_dropout 0.05 \
	--weight_decay 0.0 \
	--seed 11422 \
	--report_to wandb \
	--gradient_accumulation_steps 16 \
	--gradient_checkpointing
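
Note: with --per_device_train_batch_size 2 and --gradient_accumulation_steps 16, the effective batch size works out to 2 × 16 = 32 sequences per device per optimizer step (multiplied further by the number of GPUs if you launch across several devices).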