adrienbrault / llama2-mac-gpu.sh
Last active April 8, 2025 13:49
Run Llama-2-13B-chat locally on your M1/M2 Mac with GPU inference. Uses 10GB RAM. UPDATE: see https://twitter.com/simonw/status/1691495807319674880?s=20
# Clone llama.cpp
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
# Build it
make clean
LLAMA_METAL=1 make
# Download model
export MODEL=llama-2-13b-chat.ggmlv3.q4_0.bin
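The script is cut off at the model-download step in this excerpt. A hedged sketch of how the remaining steps typically look with llama.cpp's CLI of that era (the Hugging Face URL and the exact flags are assumptions, not taken from the gist):

```shell
# Assumes MODEL was exported as above:
# export MODEL=llama-2-13b-chat.ggmlv3.q4_0.bin

# Download the quantized GGML model (URL is an assumption; TheBloke's
# Hugging Face repo hosted q4_0 GGML quantizations under this filename)
curl -L "https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/resolve/main/${MODEL}" -o "${MODEL}"

# Start an interactive chat; -ngl 1 offloads layers to the GPU
# in Metal-enabled builds of llama.cpp
./main -m "./${MODEL}" -n 256 --color -i -ins -ngl 1
```

The `-ngl` (number of GPU layers) flag is what actually engages Metal at inference time; without it the Metal build still runs on the CPU.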
movalex / install_qbittorrent-nox_macos.md
Last active May 7, 2025 03:18
Install qbittorrent-nox on macOS 12+

Requirements

  • zlib >= 1.2.11
  • Qt 6.5.0
    • Even though qbittorrent-nox is a headless application, the Qt library must still be installed
  • CMake >= 3.16
  • Boost >= 1.76
  • libtorrent-rasterbar 1.2.19
  • Python >= 3.7.0
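On macOS the requirements above are typically satisfied via Homebrew. A minimal sketch (the formula names are assumptions, and brew installs current versions, which may exceed the stated minimums):

```shell
# Install build dependencies with Homebrew (formula names assumed;
# brew's versions may be newer than the minimums listed above)
brew install zlib qt cmake boost python

# Note: brew's libtorrent-rasterbar formula tracks the 2.x series, so the
# pinned 1.2.19 may require building libtorrent 1.2.x from source instead
brew install libtorrent-rasterbar
```

Checking the installed versions against the minimums (e.g. `cmake --version`, `brew info boost`) before configuring the build avoids confusing CMake errors later.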