!!! info 9-3-23 Added 4bit LLaMA install instructions for cards as small as 6GB VRAM! (See "BONUS 4" at the bottom of the guide)
!!! warning 9-3-23 Added torrent for HFv2 model weights, required for ooba's webUI, Kobold, Tavern and 4bit (+4bit model)! Update ASAP!
!!! danger 11-3-23 There's a new torrent version of the 4bit weights called "LLaMA-HFv2-4bit". The old "LLaMA-4bit" torrent may be fine, but if you have any issues with it, update to the new 4bit torrent, use the decapoda-research versions on HuggingFace, or produce your own 4bit weights. Newer Torrent Link or Newer Magnet Link
Want to fit the largest model possible into the VRAM you have, whether that's a little or a lot? Look no further.
Q: Doesn't 4bit have worse output quality than 8bit or 16bit?
A: No. While RTN 8bit does reduce output quality, GPTQ 4bit has effectively NO output quality loss compared to baseline uncompressed fp16. Additionally, GPTQ 3bit (coming soon) has negligible output quality loss, and that loss shrinks further as model size goes up!

Q: How many tokens per second is 2it/s?
A: The number of tokens per "iteration" (it) depends on the implementation. In ooba's webUI, 1 "it" is 8 words/tokens, so 2it/s is about 16 tokens per second!
8bit LLaMA system requirements

Model | VRAM Used | Minimum Total VRAM | Card examples | RAM/Swap to Load* |
---|---|---|---|---|
LLaMA-7B | 9.2GB | 10GB | 3060 12GB, RTX 3080 10GB, RTX 3090 | 24 GB |
LLaMA-13B | 16.3GB | 20GB | RTX 3090 Ti, RTX 4090 | 32GB |
LLaMA-30B | 36GB | 40GB | A6000 48GB, A100 40GB | 64GB |
LLaMA-65B | 74GB | 80GB | A100 80GB | 128GB |
*System RAM (not VRAM) required to load the model, in addition to having enough VRAM. NOT required to RUN the model. You can use swap space if you do not have enough RAM.
4bit LLaMA system requirements

Model | Model Size | Minimum Total VRAM | Card examples | RAM/Swap to Load* |
---|---|---|---|---|
LLaMA-7B | 3.5GB | 6GB | GTX 1660, RTX 2060, AMD RX 5700 XT, RTX 3050, RTX 3060 | 16 GB |
LLaMA-13B | 6.5GB | 10GB | AMD RX 6900 XT, RTX 2060 12GB, RTX 3060 12GB, RTX 3080, A2000 | 32 GB |
LLaMA-30B | 15.8GB | 20GB | RTX 3080 20GB, A4500, A5000, 3090, 4090, 6000, Tesla V100 | 64 GB |
LLaMA-65B | 31.2GB | 40GB | A100 40GB, 2x3090, 2x4090, A40, RTX A6000, 8000, Titan Ada | 128 GB |
*System RAM (not VRAM) required to load the model, in addition to having enough VRAM. NOT required to RUN the model. You can use swap space if you do not have enough RAM.
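If you're short on system RAM for the loading step, swap will usually get you through it. Here's a minimal sketch for Linux; the 64G size and the /swapfile path are just examples, size it to cover the gap between your RAM and the "RAM/Swap to Load" column above. On Windows, increase the page file size instead.

```sh
# Create and enable a 64GB swap file (example size/path; adjust to your system)
sudo fallocate -l 64G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# Verify the new swap is active
swapon --show
```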
8bit: Easier setup, lower output quality (due to RTN); recommended for first-timers.
4bit: Faster, smaller, higher output quality (due to GPTQ), but more difficult setup.
It's recommended to start with setting up 8bit. Once 8bit is working you can come back to read "BONUS 4" on setting up 4bit.
To continue with 8bit setup, just keep reading.
All you need to get started is to install https://github.com/oobabooga/text-generation-webui using "Installation option 1: conda".
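For reference, the conda route boils down to something like the sketch below. Treat it as an illustration of the flow, not the authoritative steps; the exact Python/PyTorch versions come from the repo's README, which may have changed since this was written.

```sh
# Rough outline of "Installation option 1: conda" (follow the repo README for the current commands)
conda create -n textgen python=3.10
conda activate textgen
# Install PyTorch with CUDA support (pick the exact command for your CUDA version from pytorch.org)
pip install torch torchvision torchaudio
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
pip install -r requirements.txt
```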
But wait, there's one more thing. You need the MODEL WEIGHTS. But you don't need just any LLaMA model weights.
The original leaked weights won't work. You need the "HFv2" (HuggingFace version 2) converted model weights, which you can get from this torrent or this magnet link.
*If you have the old weights and really want to convert them yourself, scroll to the bottom of this guide for instructions.
If you already have some weights and are not sure if they're the right ones, here's how you can tell.
The WRONG original leaked weights have filenames that look like:
consolidated.00.pth
consolidated.01.pth
The CORRECT "HF Converted" weights have filenames that look like:
pytorch_model-00001-of-00033.bin
pytorch_model-00002-of-00033.bin
pytorch_model-00003-of-00033.bin
pytorch_model-00004-of-00033.bin
Put them in text-generation-webui/models/LLaMA-7B
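For a 7B model the folder should end up looking roughly like this; the shard count differs for other model sizes, and the exact file list comes from whatever is in the HFv2 torrent.

```sh
ls text-generation-webui/models/LLaMA-7B
# config.json  tokenizer.model  tokenizer_config.json
# pytorch_model-00001-of-00033.bin ... pytorch_model-00033-of-00033.bin
# (plus a pytorch_model.bin.index.json if your copy is sharded)
```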
Now, from a command prompt in the text-generation-webui directory, run:
conda activate textgen
python server.py --model LLaMA-7B --load-in-8bit --no-stream
and GO!

*Replace LLaMA-7B with the model you're using in the command above.
Okay, I got 8bit working. Now take me to the 4bit setup instructions (see "BONUS 4" below).
Disable token streaming (--no-stream in ooba's webUI).
Install bitsandbytes (Windows only)
- Download these 2 DLL files:
  https://github.com/DeXtmL/bitsandbytes-win-prebuilt/raw/main/libbitsandbytes_cpu.dll
  https://github.com/DeXtmL/bitsandbytes-win-prebuilt/raw/main/libbitsandbytes_cuda116.dll
- Move those files into KoboldAI\miniconda3\python\Lib\site-packages\bitsandbytes
- Now edit KoboldAI\miniconda3\python\Lib\site-packages\bitsandbytes\cuda_setup\main.py as follows:
  - Change `ct.cdll.LoadLibrary(binary_path)` to `ct.cdll.LoadLibrary(str(binary_path))` in both places it appears in the file.
  - Then replace `if not torch.cuda.is_available(): return 'libsbitsandbytes_cpu.so', None, None, None, None` with `if torch.cuda.is_available(): return 'libbitsandbytes_cuda116.dll', None, None, None, None`
After that you should be able to load models with 8-bit precision.
If you run into trouble, ask for help at oobabooga/text-generation-webui#147
KoboldAI GitHub: https://github.com/KoboldAI/KoboldAI-Client
KoboldAI also requires the HFv2 converted model weights in the torrent above.
Simply place the weights in KoboldAI/models/Facebook_LLaMA-7b/ (or 13b, 30b, or 65b, depending on your model).
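In other words, something like the sketch below, shown as Linux/macOS shell commands (on Windows just copy the files in Explorer). The /path/to/LLaMA-HFv2/7B source path is a placeholder for wherever your 7B HFv2 files actually landed.

```sh
# Copy the 7B HFv2 files into Kobold's models folder (adjust the source path and model size to yours)
mkdir -p KoboldAI/models/Facebook_LLaMA-7b
cp /path/to/LLaMA-HFv2/7B/* KoboldAI/models/Facebook_LLaMA-7b/
```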
Until KoboldAI merges the patch to support these weights you'll have to patch it yourself. Follow the steps below to do that.
How to patch KoboldAI for LLaMA support
Install KoboldAI 8bit
Get KoboldAI 8bit from: https://github.com/ebolam/KoboldAI/tree/8bit
Install it using git clone -b 8bit https://github.com/ebolam/KoboldAI/
(You cannot use the Windows installer or zip file. You must install using git clone or it will not work.)
This enables 8bit/int8 support for all Kobold models, not just LLaMA. Now you'll need to add the LLaMA transformers patch to Kobold.
Open the KoboldAI command line
Open KoboldAI/commandline.bat to be presented with the KoboldAI command line (commandline.sh on Linux).
Your cmd window should look something like:
(base) C:\KoboldAI
(Note the (base) at the beginning. If you have this, you're good to go to the next step.)
Install the modified LLaMA transformers inside Kobold
Now run pip install --upgrade --force-reinstall git+https://github.com/zphang/transformers@llama_push
This will install the modified transformers into the conda environment which KoboldAI runs from.
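Optionally, sanity-check that the install landed in the right environment before launching Kobold. The `transformers.models.llama` module path is what the zphang branch is expected to add (the conversion script in that branch lives under it); if the import fails, the pip install above didn't take.

```sh
# Run these from the same KoboldAI command line (the one showing "(base)")
python -c "import transformers; print(transformers.__version__)"
python -c "import transformers.models.llama; print('llama support present')"
```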
Run KoboldAI
Run KoboldAI as normal and select AI > load a Model from its directory > Facebook_LLaMA-7b
Enjoy!
!!! info If you have issues with KoboldAI, go to their Discord: https://koboldai.org/discord
TavernAI GitHub: https://github.com/TavernAI/TavernAI

How to connect Tavern to Kobold with LLaMA

(Tavern relies on Kobold to run LLaMA. Follow all of the KoboldAI steps first.)
- With KoboldAI running and the LLaMA model loaded in the KoboldAI webUI, open TavernAI.
- Ensure TavernAI's API setting is pointing at your local machine (127.0.0.1).
- Pick a character and start chatting.
That's it! No further configuration is necessary. Enjoy!
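If Tavern can't see Kobold, first confirm Kobold's API is actually reachable. A quick check, assuming Kobold is on its default port 5000 (adjust if you changed it, and note that not every Kobold build exposes this endpoint):

```sh
# Should print the name of the currently loaded model if Kobold's API is up
curl http://127.0.0.1:5000/api/v1/model
```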
!!! info If you have issues with TavernAI, go to their Discord: https://discord.com/invite/zmK2gmr45t
Already have the old weights and don't want to download the new ones? You can convert them to HF weights yourself.
!!! warning If you manually converted before 9 March 2023, it is recommended to update to the latest version of the PR and re-convert, or download the HFv2 weights. All projects now require HFv2 weights; HF weights converted with the original conversion script will cause errors and odd behavior.
How to convert the old weights to HF weights
- Grab the original weights using this torrent file or this magnet link.
- Grab the conversion script from this PR: huggingface/transformers#21955
- Run:
  python src/transformers/models/llama/convert_llama_weights_to_hf.py --input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path
  (Note: Change the command above to point at your model. Also pay attention to the model_size setting.)
- Wait a while for the process to complete.
- Transfer all of the new model files to a single folder, including tokenizer.model, tokenizer_config.json, etc.
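One way to do that last step, assuming the defaults from the command above. The /output/path location and the destination folder are placeholders, and the converter may split the model and tokenizer into separate subfolders depending on the PR version, which is why the find below just sweeps everything up.

```sh
# Gather every converted model/tokenizer file into the webUI's model folder
mkdir -p text-generation-webui/models/LLaMA-7B
find /output/path -type f \( -name '*.bin' -o -name '*.json' -o -name 'tokenizer.model' \) \
  -exec cp {} text-generation-webui/models/LLaMA-7B/ \;
```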
You're done.
4bit has NO reduction in output quality vs 16bit (thanks to GPTQ) while substantially reducing VRAM requirements
- Acquire the updated HFv2 weights by using the following torrent (Torrent File / Magnet Link), or by converting them yourself from the original FB weights.
- Verify that you have 8bit LLaMA working in ooba's webUI per the instructions above, first.* *(If you have under 10GB of VRAM then just skip straight to step 2)
- Acquire the 4bit weights from the Newer Torrent File / Newer Magnet Link, or from the direct downloads:
  LLaMA-7B int4 DDL: https://huggingface.co/decapoda-research/llama-7b-hf-int4/resolve/main
  LLaMA-13B int4 DDL: https://huggingface.co/decapoda-research/llama-13b-hf-int4/tree/main
  LLaMA-30B int4 DDL: https://huggingface.co/decapoda-research/llama-30b-hf-int4/tree/main
  LLaMA-65B int4 DDL: https://huggingface.co/decapoda-research/llama-65b-hf-int4/tree/main
- (Windows only) Install Visual Studio 2019 with C++ build-tools before completing 4-bit setup below, per this comment on the 4bit repo
- Open a command line in the text-generation-webui directory and run
conda activate textgen
- Now continue to follow the installation instructions at https://github.com/oobabooga/text-generation-webui/wiki/LLaMA-model#4-bit-mode while running all commands from inside the (textgen) conda environment (an example launch command is shown after this list)
- Enjoy 4bit LLaMA with a webUI
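Once the wiki steps are done, launching looks much like the 8bit command from earlier, just with the 4bit flags from the wiki page. The sketch below is an illustration only: the flag names and the 4bit model folder name have changed between webUI versions, so copy the exact command from the wiki rather than from here.

```sh
conda activate textgen
# Example only: "llama-7b-4bit" is a placeholder folder name. Older builds used --load-in-4bit,
# later ones use --wbits 4; check the wiki page linked above for the flags your version expects.
python server.py --model llama-7b-4bit --wbits 4 --no-stream
```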
You need #3 for 16bit or 8bit, and BOTH #3 & #4 for 4bit!
#1 is only needed if you want to convert the HF weights yourself.

#1 Original LLaMA weights
Torrent: https://files.catbox.moe/oyy6vh.torrent
Magnet: magnet:?xt=urn:btih:b8287ebfa04f879b048d4d4404108cf3e8014352&dn=LLaMA&tr=udp%3a%2f%2ftracker.opentrackr.org%3a1337%2fannounce

#2 Old HF-converted weights (deprecated)
Torrent: Nobody needs these anymore.
Magnet: Update to HFv2 or things will break.

#3 HFv2 converted weights
Torrent: https://files.catbox.moe/wbzpkx.torrent
Magnet: magnet:?xt=urn:btih:dc73d45db45f540aeb6711bdc0eb3b35d939dcb4&dn=LLaMA-HFv2&tr=http%3a%2f%2fbt2.archive.org%3a6969%2fannounce&tr=http%3a%2f%2fbt1.archive.org%3a6969%2fannounce

#4 4bit weights
Newer Torrent / Newer Magnet / Old 4bit Torrent / Old 4bit Magnet Link
LLaMA-7B int4 DDL: https://huggingface.co/decapoda-research/llama-7b-hf-int4/resolve/main
LLaMA-13B int4 DDL: https://huggingface.co/decapoda-research/llama-13b-hf-int4/tree/main
LLaMA-30B int4 DDL: https://huggingface.co/decapoda-research/llama-30b-hf-int4/tree/main
LLaMA-65B int4 DDL: https://huggingface.co/decapoda-research/llama-65b-hf-int4/tree/main
(This is a backup copy.)