Fine-tuning Llama-2-7B (4-bit quantization + LoRA adapter)
model_type: llm
base_model: meta-llama/Llama-2-7b-hf

quantization:
  bits: 4

adapter:
  type: lora

prompt:
  template: |
    ### Instruction:
    {instruction}
    ### Input:
    {input}
    ### Response:

input_features:
  - name: prompt
    type: text

output_features:
  - name: output
    type: text

trainer:
  type: finetune
  learning_rate: 0.0001
  batch_size: 1
  gradient_accumulation_steps: 16
  epochs: 3
  learning_rate_scheduler:
    warmup_fraction: 0.01

preprocessing:
  sample_ratio: 0.1
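
The structure above matches Ludwig's declarative LLM fine-tuning config schema (model_type: llm, a LoRA adapter, 4-bit quantization, a finetune trainer). Below is a minimal sketch of how such a config could be driven from Python, assuming it is saved as model.yaml and trained on an Alpaca-style CSV with instruction, input, and output columns; the file names and dataset are illustrative, not part of the gist, and access to meta-llama/Llama-2-7b-hf is gated, so a Hugging Face token (e.g. HUGGING_FACE_HUB_TOKEN) is typically required.

# Minimal sketch: drive the YAML config above with Ludwig's Python API.
# model.yaml and the CSV file names are hypothetical placeholders.
from ludwig.api import LudwigModel

model = LudwigModel(config="model.yaml")

# Fine-tune the LoRA adapter. Per the config, sample_ratio: 0.1 trains on
# roughly 10% of the rows, and gradient_accumulation_steps: 16 gives an
# effective batch size of 16 despite batch_size: 1.
train_stats, _, output_dir = model.train(dataset="alpaca_data.csv")

# Generate responses for new prompts with the fine-tuned adapter.
predictions, _ = model.predict(dataset="eval_prompts.csv")
print(predictions.head())

With per-device batch size 1 and 16 accumulation steps, this setup is sized to fit a 7B model in 4-bit precision on a single consumer GPU while still training with a reasonable effective batch size.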