- Request access to one of the Llama 2 model repositories from Meta's HuggingFace organization, for example `Llama-2-13b-chat-hf`.
- Generate a HuggingFace read-only access token from your user profile settings page.
- Set up a Python 3.10 environment with the following dependencies installed: `transformers` and `huggingface_hub`.
- Run the following code to download and load the model with HuggingFace `transformers`:
TOKEN = ""  # copy-paste your HuggingFace access token here
### Option 1
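A minimal sketch of such a download-and-load step, assuming the gated repo id `meta-llama/Llama-2-13b-chat-hf` and that `TOKEN` holds a valid read token with access granted (both the repo id and the use of the `token` keyword argument are assumptions, not confirmed by the text above):

```python
# Sketch: download and load a gated Llama 2 model via transformers.
# Assumptions: the repo id below, and that TOKEN is a valid
# HuggingFace read-only token for an account with access granted.
MODEL_ID = "meta-llama/Llama-2-13b-chat-hf"
TOKEN = ""  # copy-paste your HuggingFace access token here


def load_model(model_id: str, token: str):
    # Imported inside the function so the sketch can be read
    # without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id, token=token)
    model = AutoModelForCausalLM.from_pretrained(model_id, token=token)
    return tokenizer, model


if __name__ == "__main__":
    tokenizer, model = load_model(MODEL_ID, TOKEN)
```

Note that the first call downloads tens of gigabytes of weights to the local HuggingFace cache; subsequent calls load from disk.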