Created May 15, 2024 16:47
Axolotl Config for Llama-3-70B QLoRA
base_model: meta-llama/Meta-Llama-3-70B
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: true
strict: false

datasets:
  - path: /home/migel/ai_datasets/tess-v1.5b-chatml.jsonl
    type: sharegpt
    conversation: llama3

chat_template: llama3

adapter: qlora
lora_r: 128
lora_alpha: 16
lora_modules_to_save: [embed_tokens, lm_head]
lora_dropout: 0.05
lora_target_linear: true

dataset_prepared_path: last_run_prepared
val_set_size: 0.05
output_dir: /home/migel/whiterabbitneo-llama3-70B

sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true

wandb_project: llama-3
wandb_watch:
wandb_run_id:
wandb_log_model:

gradient_accumulation_steps: 4
micro_batch_size: 3
num_epochs: 2
optimizer: adamw_8bit
lr_scheduler: constant
learning_rate: 1e-5

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 100
evals_per_epoch: 5
eval_table_size:
saves_per_epoch: 5
save_total_limit: 10
save_steps:
debug:
deepspeed: /home/migel/axolotl/deepspeed_configs/zero3_bf16.json
weight_decay: 0.00
fsdp:
fsdp_config:
special_tokens:
  pad_token: "<|end_of_text|>"
How much VRAM is needed for fine-tuning using this config?

Hey, this is optimized for 320 GB of VRAM. You can play around with micro_batch_size, gradient_accumulation_steps, and lora_r to suit your needs.

Sweet. This is really helpful. Thanks!
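Following up on the tuning tip above: the per-GPU effective batch size is micro_batch_size × gradient_accumulation_steps, i.e. 3 × 4 = 12 with this file. A lower-memory variant, offered only as an untested sketch, could swap in values like:

micro_batch_size: 1
gradient_accumulation_steps: 12
lora_r: 64

This keeps the per-GPU effective batch size at 12 while reducing activation memory per step and halving the LoRA adapter size; lowering lora_r does trade away some adapter capacity.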