cmake_minimum_required(VERSION 3.27)
project(_ext LANGUAGES CXX)
# ----------------------------- Setup -----------------------------
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_POSITION_INDEPENDENT_CODE ON)
option(BUILD_SHARED_LIBS "Build as a shared library" ON)
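# --- a plausible continuation; the preview cuts off here, so the target
# --- name and source file below are assumptions, not the gist's code ---
add_library(_ext src/ext.cpp)
target_compile_features(_ext PRIVATE cxx_std_17)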
# -----------------------------------------------------------------------------
# AI-powered Git Commit Function
# Copy/paste this gist into your ~/.bashrc or ~/.zshrc to gain the `gcm` command. It:
# 1) gets the diff of the currently staged changes
# 2) sends it to an LLM to write the git commit message
# 3) lets you easily accept, edit, regenerate, or cancel
# But - just read and edit the code however you like
# the `llm` CLI util is awesome; you can get it here: https://llm.datasette.io/en/stable/
gcm() {
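    # A minimal sketch of the function body (the preview cuts off here);
    # it assumes `llm` is installed and there are staged changes to diff.
    local msg
    msg=$(git diff --cached | llm "Write a concise, one-line git commit message for this diff:") || return 1
    echo "Proposed commit message:"
    echo "  $msg"
    printf '(a)ccept, (e)dit, (r)egenerate, (c)ancel? '
    read -r choice
    case "$choice" in
        a) git commit -m "$msg" ;;
        e) git commit -e -m "$msg" ;;   # opens your editor pre-filled with the message
        r) gcm ;;                       # regenerate by re-running the function
        *) echo "Commit cancelled." ;;
    esac
}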
# Requires:
#   pip install pyobjc-framework-Metal
import numpy as np
import Metal
# Get the default GPU device
device = Metal.MTLCreateSystemDefaultDevice()
# Make a command queue to encode command buffers to
command_queue = device.newCommandQueue()
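# --- a plausible continuation (the preview cuts off here); the buffer size,
# --- dtype, and numpy round-trip below are assumptions, not the gist's code ---
n = 16
a = np.arange(n, dtype=np.float32)
# Allocate a buffer visible to both CPU and GPU
buf = device.newBufferWithLength_options_(a.nbytes, Metal.MTLResourceStorageModeShared)
# View the buffer's memory as a numpy array and copy our data in
np.frombuffer(buf.contents().as_buffer(a.nbytes), dtype=np.float32)[:] = a
# Encode and run an (empty) command buffer to round-trip through the queue
command_buffer = command_queue.commandBuffer()
command_buffer.commit()
command_buffer.waitUntilCompleted()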
# This is a modified version of TRL's `SFTTrainer` example (https://github.com/huggingface/trl/blob/main/examples/scripts/sft_trainer.py),
# adapted to run with DeepSpeed ZeRO-3 and Mistral-7B-V1.0. The settings below were run on 1 node of 8 x A100 (80GB) GPUs.
#
# Usage:
#   - Install the latest transformers & accelerate versions: `pip install -U transformers accelerate`
#   - Install deepspeed: `pip install deepspeed==0.9.5`
#   - Install TRL from main: `pip install git+https://github.com/huggingface/trl.git`
#   - Clone the repo: `git clone https://github.com/huggingface/trl.git`
#   - Copy this Gist into `trl/examples/scripts`
#   - Run from the root of the trl repo with: `accelerate launch --config_file=examples/accelerate_configs/deepspeed_zero3.yaml --gradient_accumulation_steps 8 examples/scripts/sft_trainer.py`
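For orientation, the heart of such a script looks roughly like this (a sketch only: TRL's `SFTTrainer` arguments have changed across versions, and the dataset below is a stand-in, not the gist's):

from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Stand-in dataset with a "text" column; the gist's actual dataset is not shown
dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")

trainer = SFTTrainer(
    model="mistralai/Mistral-7B-v0.1",   # loaded by name in older TRL versions
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=512,
    args=TrainingArguments(
        output_dir="sft-mistral-7b",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,   # matches the launch command above
        bf16=True,
    ),
)
trainer.train()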
docker run \
    --rm \
    -it \
    -e ONNXRUNTIME_REPO=https://github.com/microsoft/onnxruntime \
    -e ONNXRUNTIME_COMMIT=v1.15.1 \
    -e BUILD_CONFIG=Release \
    -e CMAKE_VERSION=3.26.4 \
    -e CPU_ARCHITECTURE=$(uname -m) \
    -e CUDA_ARCHITECTURES="70;75;80;86" \
    -v /usr/lib/aarch64-linux-gnu/tegra:/usr/lib/aarch64-linux-gnu/tegra:ro \
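# (the docker run command is cut off in the preview above)
# Inside the container, those variables would typically drive a build along
# these lines - a sketch using onnxruntime's build.sh flags, not the gist's
# exact steps:
git clone --recursive --branch "$ONNXRUNTIME_COMMIT" "$ONNXRUNTIME_REPO" /onnxruntime
cd /onnxruntime
./build.sh --config "$BUILD_CONFIG" --parallel \
    --use_cuda --cuda_home /usr/local/cuda --cudnn_home /usr/lib/aarch64-linux-gnu \
    --cmake_extra_defines "CMAKE_CUDA_ARCHITECTURES=$CUDA_ARCHITECTURES"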
# Clone llama.cpp
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
# Build it
LLAMA_METAL=1 make
# Download model
export MODEL=llama-2-13b-chat.ggmlv3.q4_0.bin
wget "https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/resolve/main/${MODEL}"
A quick guide on how to set up X11 forwarding on macOS when using Docker containers that require a DISPLAY. Works on both Intel and M1 Macs!
This guide was tested on:
- macOS Catalina 10.15.4
- Docker Desktop 2.2.0.5 (43884) - stable release
- XQuartz 2.7.11 (xorg-server 1.18.4)
- MacBook Pro (Intel)
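In outline, the setup goes like this (a sketch of the usual steps; the exact commands in the full guide may differ):

# 1) Install XQuartz, then enable "Allow connections from network clients"
#    in XQuartz > Preferences > Security, and restart XQuartz
brew install --cask xquartz

# 2) Allow X11 connections from localhost
xhost + 127.0.0.1

# 3) Point the container's DISPLAY at the host's X server
#    ("some-gui-image" is a placeholder for your image)
docker run -e DISPLAY=host.docker.internal:0 some-gui-image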