Andrews Cordolino Sobral (andrewssobral)

🔴
I may be very slow to respond.
View GitHub Profile
andrewssobral / metal_in_python.py
Created August 11, 2024 07:50 — forked from awni/metal_in_python.py
Compile and call a Metal GPU kernel from Python
# Requires:
# pip install pyobjc-framework-Metal
import numpy as np
import Metal
# Get the default GPU device
device = Metal.MTLCreateSystemDefaultDevice()
# Make a command queue to encode command buffers to
command_queue = device.newCommandQueue()
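For context, here is a hedged sketch of the next steps such a script takes: compiling a Metal source string into a library and building a compute pipeline via the pyobjc bindings. It picks up from the `device` created above; the kernel name `add_arrays` and its source are illustrative, not the gist's exact code.
kernel_source = """
#include <metal_stdlib>
using namespace metal;

kernel void add_arrays(device const float* a,
                       device const float* b,
                       device float* out,
                       uint i [[thread_position_in_grid]]) {
    out[i] = a[i] + b[i];
}
"""
# Compile the source; PyObjC returns a (library, error) tuple because
# `newLibraryWithSource:options:error:` has an NSError** out-parameter.
library, error = device.newLibraryWithSource_options_error_(
    kernel_source, Metal.MTLCompileOptions.new(), None
)
assert error is None, error
# Look up the kernel and build a compute pipeline state, which can then be
# bound to a compute command encoder and dispatched on the command queue above.
kernel_fn = library.newFunctionWithName_("add_arrays")
pipeline_state, error = device.newComputePipelineStateWithFunction_error_(kernel_fn, None)
assert error is None, error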
andrewssobral / generate_codebase.sh
Created July 31, 2024 12:56
Bash Script for Codebase Generation to be used with LLMs
#!/bin/bash
# set -x # Enable debug mode
set -e # Exit immediately if a command exits with a non-zero status.
echo "Script started" >&2
# Function to get absolute path (works on macOS and Linux)
get_abs_path() {
local path="$1"
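The preview above is cut off, so as a rough illustration of the idea behind such a script (a hypothetical sketch, not the gist's actual logic), the Python equivalent is to walk a source tree, skip binary and generated files, and concatenate each text file under a path header into one document an LLM can ingest:
import os

SKIP_DIRS = {".git", "node_modules", "__pycache__"}   # illustrative exclusions
SKIP_EXTS = {".png", ".jpg", ".zip", ".bin"}          # illustrative binary types

def generate_codebase(root: str, output: str) -> None:
    # Walk the tree, pruning excluded directories in place so os.walk skips them.
    with open(output, "w", encoding="utf-8") as out:
        for dirpath, dirnames, filenames in os.walk(root):
            dirnames[:] = [d for d in dirnames if d not in SKIP_DIRS]
            for name in sorted(filenames):
                if os.path.splitext(name)[1].lower() in SKIP_EXTS:
                    continue
                path = os.path.join(dirpath, name)
                rel = os.path.relpath(path, root)
                try:
                    with open(path, "r", encoding="utf-8") as f:
                        content = f.read()
                except UnicodeDecodeError:
                    continue  # skip files that are not valid UTF-8 text
                out.write(f"===== {rel} =====\n{content}\n\n")

if __name__ == "__main__":
    generate_codebase(".", "codebase.txt")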
andrewssobral / sft_trainer.py
Created October 10, 2023 21:16 — forked from lewtun/sft_trainer.py
Fine-tuning Mistral 7B with TRL & DeepSpeed ZeRO-3
# This is a modified version of TRL's `SFTTrainer` example (https://github.com/huggingface/trl/blob/main/examples/scripts/sft_trainer.py),
# adapted to run with DeepSpeed ZeRO-3 and Mistral-7B-v0.1. The settings below were run on 1 node of 8 x A100 (80GB) GPUs.
#
# Usage:
# - Install the latest transformers & accelerate versions: `pip install -U transformers accelerate`
# - Install deepspeed: `pip install deepspeed==0.9.5`
# - Install TRL from main: pip install git+https://github.com/huggingface/trl.git
# - Clone the repo: git clone https://github.com/huggingface/trl.git
# - Copy this Gist into trl/examples/scripts
# - Run from root of trl repo with: accelerate launch --config_file=examples/accelerate_configs/deepspeed_zero3.yaml --gradient_accumulation_steps 8 examples/scripts/sft_trainer.py
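For orientation, a minimal sketch of the TRL entry point such a script wraps, assuming the TRL/transformers APIs of late 2023 (this is not lewtun's actual code; the dataset, output path, and hyperparameters are placeholders, and DeepSpeed ZeRO-3 comes entirely from the accelerate config file named in the launch command above):
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Placeholder instruction-tuning dataset with a plain "text" column.
dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")

training_args = TrainingArguments(
    output_dir="./sft-mistral-7b",          # placeholder output path
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,          # matches the launch command above
    learning_rate=2e-5,
    num_train_epochs=1,
    bf16=True,
)

trainer = SFTTrainer(
    model="mistralai/Mistral-7B-v0.1",      # a model name string is loaded internally
    args=training_args,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
)
trainer.train()
When launched through `accelerate launch` with the ZeRO-3 config, the same trainer shards optimizer state, gradients, and parameters across the 8 GPUs.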
andrewssobral / gist:39c40695df7e414c72309f6e79d14c0a
Created July 29, 2023 20:07 — forked from seddonm1/gist:5927db05cb7ad38d98a22674fa82a4c6
How to build onnxruntime on an aarch64 NVIDIA device (like Jetson Orin AGX)
docker run \
--rm \
-it \
-e ONNXRUNTIME_REPO=https://github.com/microsoft/onnxruntime \
-e ONNXRUNTIME_COMMIT=v1.15.1 \
-e BUILD_CONFIG=Release \
-e CMAKE_VERSION=3.26.4 \
-e CPU_ARCHITECTURE=$(uname -m) \
-e CUDA_ARCHITECTURES="70;75;80;86" \
-v /usr/lib/aarch64-linux-gnu/tegra:/usr/lib/aarch64-linux-gnu/tegra:ro \
andrewssobral / llama2-mac-gpu.sh
Created July 27, 2023 15:20 — forked from adrienbrault/llama2-mac-gpu.sh
Run Llama-2-13B-chat locally on your M1/M2 Mac with GPU inference. Uses 10GB RAM
# Clone llama.cpp
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
# Build it
LLAMA_METAL=1 make
# Download model
export MODEL=llama-2-13b-chat.ggmlv3.q4_0.bin
wget "https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/resolve/main/${MODEL}"
andrewssobral / async_openai_requests.py
Created April 24, 2023 09:11
Asynchronous OpenAI API requests with asyncio
import openai
import asyncio
from typing import Any
async def dispatch_openai_requests(
messages_list: list[list[dict[str,Any]]],
model: str,
temperature: float,
max_tokens: int,
top_p: float,
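The preview above stops mid-signature; here is a self-contained sketch of the full pattern using the pre-1.0 `openai` client that was current in April 2023 (the model name and prompt are placeholders):
import asyncio
from typing import Any

import openai


async def dispatch_openai_requests(
    messages_list: list[list[dict[str, Any]]],
    model: str,
    temperature: float,
    max_tokens: int,
    top_p: float,
) -> list[Any]:
    # Fire all chat-completion calls concurrently and wait for every response.
    tasks = [
        openai.ChatCompletion.acreate(
            model=model,
            messages=messages,
            temperature=temperature,
            max_tokens=max_tokens,
            top_p=top_p,
        )
        for messages in messages_list
    ]
    return await asyncio.gather(*tasks)


# Example usage:
# responses = asyncio.run(
#     dispatch_openai_requests(
#         messages_list=[[{"role": "user", "content": "Hello!"}]],
#         model="gpt-3.5-turbo",
#         temperature=0.7,
#         max_tokens=64,
#         top_p=1.0,
#     )
# )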
andrewssobral / x11_forwarding_macos_docker.md
Created March 3, 2023 20:00 — forked from sorny/x11_forwarding_macos_docker.md
X11 forwarding with macOS and Docker

X11 forwarding on macOS and Docker

A quick guide on how to set up X11 forwarding on macOS when using Docker containers that require a DISPLAY. Works on both Intel and M1 Macs!

This guide was tested on:

  • macOS Catalina 10.15.4
  • Docker Desktop 2.2.0.5 (43884) - stable release
  • XQuartz 2.7.11 (xorg-server 1.18.4)
  • MacBook Pro (Intel)
andrewssobral / m3u8-to-mp4.md
Created March 25, 2022 22:54 — forked from tzmartin/m3u8-to-mp4.md
m3u8 stream to mp4 using ffmpeg

1. Copy m3u8 link


2. Run command

echo "Enter m3u8 link:";read link;echo "Enter output filename:";read filename;ffmpeg -i "$link" -bsf:a aac_adtstoasc -vcodec copy -c copy -crf 50 $filename.mp4