adrienbrault / llama2-mac-gpu.sh
Last active August 15, 2024 07:10
Run Llama-2-13B-chat locally on your M1/M2 Mac with GPU inference. Uses 10GB RAM. UPDATE: see https://twitter.com/simonw/status/1691495807319674880?s=20
# Clone llama.cpp
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
# Build it with Metal (Apple GPU) acceleration enabled
make clean
LLAMA_METAL=1 make
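# Note: newer llama.cpp checkouts replaced the Makefile with cmake, and Metal is
# enabled by default on Apple Silicon there. This is an assumption about your
# checkout, not part of the original gist; the rough equivalent build would be:
#   cmake -B build
#   cmake --build build --config Release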
# Download the model (13B chat, 4-bit q4_0 GGML quantization)
export MODEL=llama-2-13b-chat.ggmlv3.q4_0.bin
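# The gist continues from here; below is a minimal sketch of the remaining
# steps, assuming the GGML file is still hosted on TheBloke's Hugging Face
# mirror and that this 2023-era tree builds the ./main binary:
curl -L -O "https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/resolve/main/${MODEL}"

# Interactive chat, offloading layers to the Metal GPU (-ngl 1)
./main -m "./${MODEL}" -ngl 1 --color -c 2048 -ins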