vladimiralencar / llama-2-setup.md (forked from zachschillaci27/llama-2-setup.md, created August 10, 2024)
# Download and run llama-2 locally

### Option 1 (easy): HuggingFace Hub Download

1. Request access to one of the Llama 2 model repositories from Meta's HuggingFace organization, for example Llama-2-13b-chat-hf.
2. Generate a HuggingFace read-only access token from your user profile settings page.
3. Set up a Python 3.10 environment with the following dependencies installed: `transformers`, `huggingface_hub`.
4. Run the following code to download and load the model with HuggingFace transformers:
```python
TOKEN = ""  # copy-paste your HuggingFace access token here
```
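The full download-and-load step can look roughly like the sketch below. This is a minimal sketch rather than a canonical recipe: the `meta-llama/Llama-2-13b-chat-hf` repo id, the `token=` keyword argument (older `transformers` releases use `use_auth_token=`), and the short generation smoke test are assumptions for illustration; swap in the checkpoint you were actually granted access to.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id: use whichever Llama 2 checkpoint you requested access to.
MODEL = "meta-llama/Llama-2-13b-chat-hf"
TOKEN = ""  # copy-paste your HuggingFace read-only access token here

# The first call downloads the weights into the local HuggingFace cache;
# later calls reuse the cached files.
tokenizer = AutoTokenizer.from_pretrained(MODEL, token=TOKEN)
model = AutoModelForCausalLM.from_pretrained(MODEL, token=TOKEN)

# Quick smoke test: generate a short completion.
inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that PyTorch must also be installed for `AutoModelForCausalLM` to load the weights, and the 13B checkpoint needs tens of gigabytes of memory; a smaller variant such as the 7B chat model is an easier first test on modest hardware.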
