This document explains in detail how we fine-tune the NousResearch/Llama-2-7b-chat-hf
model on a financial tweet sentiment dataset using the QLoRA method. Training is performed with Hugging Face Transformers, PEFT (LoRA adapters), and bitsandbytes for 4-bit quantization of the base model.
# Base chat model that will be fine-tuned with QLoRA
model_name = "NousResearch/Llama-2-7b-chat-hf"
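The snippet below is a minimal sketch of that setup, continuing from the model_name definition above and using the standard Transformers, PEFT, and bitsandbytes APIs. The LoRA rank, alpha, dropout, and target modules are illustrative defaults and are not values taken from this document.

# Minimal QLoRA setup sketch (hyperparameters are illustrative assumptions).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization via bitsandbytes, the base-model format used by QLoRA.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

# Load the base model in 4-bit and prepare it for k-bit (QLoRA) training.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Attach LoRA adapters to the attention projections; r/alpha/dropout below are
# common QLoRA defaults, not settings confirmed by the source.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

With this configuration only the small LoRA adapter weights are trained while the quantized base model stays frozen, which is what makes fine-tuning a 7B model feasible on a single consumer GPU.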