Andrews Cordolino Sobral (andrewssobral)
@andrewssobral
andrewssobral / Makefile
Created February 13, 2025 02:26 — forked from rafaelrc7/Makefile
Generic Makefile for c/cxx/asm
######################### Preamble ###########################################
# Run every recipe as a single bash invocation with strict error handling.
SHELL := bash
.ONESHELL:
.SHELLFLAGS := -eu -o pipefail -c
# Delete half-built targets when a recipe fails; enable $$-expansion of prerequisites.
.DELETE_ON_ERROR:
.SECONDEXPANSION:
# Warn on undefined variables, drop the built-in rules, and build in parallel on all cores.
MAKEFLAGS += --warn-undefined-variables
MAKEFLAGS += --no-builtin-rules
MAKEFLAGS += -j$(shell nproc)
@andrewssobral
andrewssobral / 30s-terminal-tools.md
Created December 3, 2024 22:21 — forked from wilderlopes/30s-terminal-tools.md
List of terminal-based developer tools that deliver value in 30 seconds

Setting Up MCP Servers on Windows

A step-by-step guide to setting up Model Context Protocol (MCP) servers for Claude Desktop on Windows.

Prerequisites

  1. Install Node.js (v18.x or later)
    • Download from: https://nodejs.org/
    • Verify installation by opening PowerShell and running:
      node --version
      npm --version
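
With Node.js in place, Claude Desktop reads its MCP servers from claude_desktop_config.json (on Windows this normally lives under %APPDATA%\Claude). The Python sketch below writes a minimal config; it is only an illustration - the filesystem server package and the allowed directory are example choices, not part of this guide.

import json
import os

# Usual Windows location of the Claude Desktop config (assumes a default install).
config_path = os.path.join(os.environ["APPDATA"], "Claude", "claude_desktop_config.json")

# Illustrative entry: the reference filesystem MCP server, restricted to one folder.
config = {
    "mcpServers": {
        "filesystem": {
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "C:\\Users\\you\\Documents"],
        }
    }
}

# Note: this overwrites an existing config; merge by hand if you already have one.
os.makedirs(os.path.dirname(config_path), exist_ok=True)
with open(config_path, "w") as f:
    json.dump(config, f, indent=2)

Restart Claude Desktop after writing the file so it picks up the new server.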

@andrewssobral
andrewssobral / claude-autoclicker.sh
Created December 1, 2024 23:23 — forked from supersational/claude-autoclicker.sh
Claude Autoclick "Allow Tool"
# Poll the Claude app's UI and click the "Allow for This Chat" button whenever it appears.
while true; do
  osascript -e '
  tell application "System Events"
    if exists process "Claude" then
      tell process "Claude"
        if exists button "Allow for This Chat" of group 2 of group 1 of group 1 of group 1 of UI element 2 of group 1 of group 1 of group 1 of group 1 of window "Claude" then
          click button "Allow for This Chat" of group 2 of group 1 of group 1 of group 1 of UI element 2 of group 1 of group 1 of group 1 of group 1 of window "Claude"
          log "clicked allow button"
        end if
      end tell
    end if
  end tell'
  # The closing of the loop was cut off in the preview; the 1-second poll interval is an assumption.
  sleep 1
done
@andrewssobral
andrewssobral / Matrix.md
Created November 29, 2024 13:39 — forked from nadavrot/Matrix.md
Efficient matrix multiplication

High-Performance Matrix Multiplication

This is a short post that explains how to write a high-performance matrix multiplication program on modern processors. In this tutorial I will use a single core of the Skylake-client CPU with AVX2, but the principles in this post also apply to other processors with different instruction sets (such as AVX512).

Intro

Matrix multiplication is a mathematical operation that defines the product of two matrices.
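
As a point of reference for the optimizations that follow, the sketch below is just the textbook triple loop (written in Python rather than the C the post works with), checked against NumPy; it is the naive baseline, not the high-performance kernel the post builds.

import numpy as np

def matmul_naive(A, B):
    """Textbook O(n^3) multiplication: C[i, j] = sum over k of A[i, k] * B[k, j]."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m), dtype=A.dtype)
    for i in range(n):
        for j in range(m):
            acc = 0.0
            for p in range(k):
                acc += A[i, p] * B[p, j]
            C[i, j] = acc
    return C

A = np.random.rand(64, 64)
B = np.random.rand(64, 64)
assert np.allclose(matmul_naive(A, B), A @ B)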

@andrewssobral
andrewssobral / CMakeLists.txt
Created October 23, 2024 22:00 — forked from awni/CMakeLists.txt
Minimal MLX CMake
cmake_minimum_required(VERSION 3.27)
project(_ext LANGUAGES CXX)
# ----------------------------- Setup -----------------------------
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_POSITION_INDEPENDENT_CODE ON)
option(BUILD_SHARED_LIBS "Build as a shared library" ON)
@andrewssobral
andrewssobral / add_to_zshrc.sh
Created September 1, 2024 21:41 — forked from karpathy/add_to_zshrc.sh
Git Commit Message AI
# -----------------------------------------------------------------------------
# AI-powered Git Commit Function
# Copy-paste this gist into your ~/.bashrc or ~/.zshrc to gain the `gcm` command. It:
# 1) gets the diff of the currently staged changes
# 2) sends it to an LLM to write the git commit message
# 3) lets you easily accept, edit, regenerate, or cancel
# But - just read and edit the code however you like
# the `llm` CLI util is awesome; you can get it here: https://llm.datasette.io/en/stable/
gcm() {
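
The preview stops at the function header. As a hedged illustration of the same workflow (staged diff -> `llm` -> accept/edit/regenerate/cancel), here is a rough Python analogue rather than the original zsh function; the prompt wording and the interaction loop are assumptions, not karpathy's code.

import subprocess

def gcm():
    """Generate a commit message for the staged diff with the `llm` CLI, then confirm it."""
    diff = subprocess.run(["git", "diff", "--cached"], capture_output=True, text=True).stdout
    if not diff.strip():
        print("Nothing staged.")
        return
    prompt = "Write a concise, one-line git commit message for this diff:"
    while True:
        # `llm` appends stdin to the prompt, so pipe the diff in.
        msg = subprocess.run(["llm", prompt], input=diff, capture_output=True, text=True).stdout.strip()
        print(f"\nProposed commit message:\n  {msg}")
        choice = input("(a)ccept, (e)dit, (r)egenerate, (c)ancel? ").strip().lower()
        if choice == "a":
            subprocess.run(["git", "commit", "-m", msg])
            return
        if choice == "e":
            edited = input("Edited message: ").strip() or msg
            subprocess.run(["git", "commit", "-m", edited])
            return
        if choice == "c":
            return
        # anything else (including "r") regenerates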
@andrewssobral
andrewssobral / metal_in_python.py
Created August 11, 2024 07:50 — forked from awni/metal_in_python.py
Compile and call a Metal GPU kernel from Python
# Requires:
# pip install pyobjc-framework-Metal
import numpy as np
import Metal
# Get the default GPU device
device = Metal.MTLCreateSystemDefaultDevice()
# Make a command queue to encode command buffers to
command_queue = device.newCommandQueue()
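
The preview ends right after the command queue. Below is a hedged continuation sketch of the "compile" half: the kernel name and source are illustrative, it relies on PyObjC returning a (result, error) tuple for `error:` out-parameters, and it stops before buffer setup and kernel dispatch, so it is not awni's original file.

import Metal

device = Metal.MTLCreateSystemDefaultDevice()

# A trivial Metal Shading Language kernel (illustrative): y = alpha * x.
kernel_source = """
#include <metal_stdlib>
using namespace metal;

kernel void ax(device const float* x [[buffer(0)]],
               device float* y       [[buffer(1)]],
               constant float& alpha [[buffer(2)]],
               uint i [[thread_position_in_grid]]) {
    y[i] = alpha * x[i];
}
"""

# Compile the source into a library (nil compile options), then build a compute pipeline.
library, error = device.newLibraryWithSource_options_error_(kernel_source, None, None)
assert error is None, error
kernel = library.newFunctionWithName_("ax")
pipeline_state, error = device.newComputePipelineStateWithFunction_error_(kernel, None)
assert error is None, error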
@andrewssobral
andrewssobral / sft_trainer.py
Created October 10, 2023 21:16 — forked from lewtun/sft_trainer.py
Fine-tuning Mistral 7B with TRL & DeepSpeed ZeRO-3
# This is a modified version of TRL's `SFTTrainer` example (https://github.com/huggingface/trl/blob/main/examples/scripts/sft_trainer.py),
# adapted to run with DeepSpeed ZeRO-3 and Mistral-7B-V1.0. The settings below were run on 1 node of 8 x A100 (80GB) GPUs.
#
# Usage:
# - Install the latest transformers & accelerate versions: `pip install -U transformers accelerate`
# - Install deepspeed: `pip install deepspeed==0.9.5`
# - Install TRL from main: `pip install git+https://github.com/huggingface/trl.git`
# - Clone the repo: `git clone https://github.com/huggingface/trl.git`
# - Copy this Gist into trl/examples/scripts
# - Run from root of trl repo with: accelerate launch --config_file=examples/accelerate_configs/deepspeed_zero3.yaml --gradient_accumulation_steps 8 examples/scripts/sft_trainer.py
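
For context on what the launched script does, a stripped-down TRL SFTTrainer run looks roughly like the sketch below. It is a hedged illustration against the TRL API of that era (~0.7, where `dataset_text_field` and `max_seq_length` were trainer arguments; newer releases moved them into `SFTConfig`), the dataset is just a common example choice, and it omits the DeepSpeed ZeRO-3 / accelerate launch setup that the gist is actually about.

from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Example instruction-tuning dataset with a plain "text" column (illustrative choice).
dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")

training_args = TrainingArguments(
    output_dir="mistral-7b-sft",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=2e-5,
    num_train_epochs=1,
    bf16=True,
    logging_steps=10,
)

# TRL ~0.7-style constructor: the model can be passed as a Hub id string.
trainer = SFTTrainer(
    model="mistralai/Mistral-7B-v0.1",
    args=training_args,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
)
trainer.train()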