Big Boss 0xBigBoss
@0xBigBoss
0xBigBoss / grpo_demo.py
Created January 31, 2025 00:53 — forked from willccbb/grpo_demo.py
GRPO Llama-1B
# train_grpo.py
import re
import torch
from datasets import load_dataset, Dataset
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import LoraConfig
from trl import GRPOConfig, GRPOTrainer
# Load and prep dataset
-- Find indexes on the table
SELECT indexname, tablename
FROM pg_indexes
WHERE tablename = '' || ${table_name} || '';
-- Find foreign key relationships
SELECT
  tc.table_schema,
  tc.constraint_name,
  tc.table_name,
  kcu.column_name,
  ccu.table_name AS foreign_table_name,
  ccu.column_name AS foreign_column_name
FROM information_schema.table_constraints tc
JOIN information_schema.key_column_usage kcu ON kcu.constraint_name = tc.constraint_name
JOIN information_schema.constraint_column_usage ccu ON ccu.constraint_name = tc.constraint_name
WHERE tc.constraint_type = 'FOREIGN KEY'
  AND tc.table_name = '' || ${table_name} || '';
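For context on grpo_demo.py: a minimal sketch of how a GRPO run is typically wired up with TRL's GRPOTrainer. The dataset, reward function, model name, and hyperparameters below are illustrative assumptions, not the gist's actual contents.

# Illustrative GRPO wiring with TRL; values are placeholders, not the gist's.
import re
from datasets import load_dataset
from peft import LoraConfig
from trl import GRPOConfig, GRPOTrainer

# Toy reward: favor completions that contain a digit (stands in for a real verifier).
def reward_has_number(completions, **kwargs):
    return [1.0 if re.search(r"\d", c) else 0.0 for c in completions]

dataset = load_dataset("trl-lib/tldr", split="train")  # assumption: any prompt dataset works

trainer = GRPOTrainer(
    model="meta-llama/Llama-3.2-1B-Instruct",  # assumption: a ~1B instruct model
    reward_funcs=reward_has_number,
    args=GRPOConfig(
        output_dir="grpo-demo",
        per_device_train_batch_size=4,
        num_generations=4,          # group size: completions sampled per prompt
        max_completion_length=128,
        learning_rate=1e-5,
        logging_steps=10,
    ),
    train_dataset=dataset,
    peft_config=LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32),
)
trainer.train()

GRPO scores a group of num_generations completions per prompt and uses their relative rewards as the advantage signal, which is why the reward function returns one float per completion.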
@0xBigBoss
0xBigBoss / README.md
Created January 11, 2025 04:35
Running k3s on a different disk


# Relocate k3s state onto /data2 by symlinking its default directories
sudo ln -s /data2/k3s-rancher-etc/ /etc/rancher
sudo ln -s /data2/k3s/ /run/k3s
sudo ln -s /data2/k3s-kubelet/ /var/lib/kubelet
sudo ln -s /data2/k3s-rancher/ /var/lib/rancher

@0xBigBoss
0xBigBoss / k3s-over-zerotier.sh
Created January 6, 2025 22:53
Connect k3s over a Zerotier network
curl -sfL https://get.k3s.io | sh -s - \
  --bind-address=0.0.0.0 \
  --flannel-iface=zt12345678 \
  --node-ip=10.0.0.2 \
  --cluster-init

# from control plane
K3S_URL=https://10.0.0.2:6443
K3S_TOKEN=$(sudo cat /var/lib/rancher/k3s/server/node-token)

# on each agent node, join over the ZeroTier interface (agent IP is a placeholder)
curl -sfL https://get.k3s.io | K3S_URL=$K3S_URL K3S_TOKEN=$K3S_TOKEN sh -s - agent \
  --flannel-iface=zt12345678 \
  --node-ip=<agent-zerotier-ip>
@0xBigBoss
0xBigBoss / runnning.md
Last active January 2, 2025 23:43
Building vLLM + PyTorch + Torchvision from source

Run vLLM in Distributed Mode with Ray

Prerequisites

A Docker image with the vLLM server installed.

export DOCKER_IMAGE=docker.io/fxnlabs/vllm-openai
# or use the following if you want to use the latest version
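Once the vLLM OpenAI-compatible server is running on the Ray cluster, it can be exercised from Python. A minimal sketch using the openai client; the localhost:8000 address and the model name are assumptions, not values from this gist.

# Query a running vLLM OpenAI-compatible server (address and model are placeholders).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # assumption: whichever model the server loaded
    messages=[{"role": "user", "content": "Say hello from the Ray cluster."}],
)
print(resp.choices[0].message.content)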
@0xBigBoss
0xBigBoss / aa-bundler-rpc.ts
Last active December 31, 2024 20:55
Call the AA Bundler RPC Method
async function callDebugRpcMethod(method: string, params: any[]) {
  const response = await fetch("http://localhost:3030/rpc", {
    method: "POST", // JSON-RPC requests are POSTed
    headers: {
      "content-type": "application/json",
    },
    referrer: "http://localhost:3000/",
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: method,
      params: params,
    }),
  });
  return response.json();
}
@0xBigBoss
0xBigBoss / environment-history.yml
Created December 30, 2024 23:16
Conda environment for compiling PyTorch and vLLM from source.
name: pytorch
channels:
- https://repo.anaconda.com/pkgs/main
- https://repo.anaconda.com/pkgs/r
dependencies:
- python=3.12.8
- cmake
- ninja
- libjpeg-turbo
- libpng
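After the source builds finish, a quick import check from inside this environment confirms the pieces line up. A small sketch, assuming torch, torchvision, and vllm were all installed into the pytorch env above.

# Verify the from-source builds are importable and CUDA is visible.
import torch
import torchvision
import vllm

print("torch:", torch.__version__, "cuda:", torch.version.cuda)
print("torchvision:", torchvision.__version__)
print("vllm:", vllm.__version__)
print("cuda available:", torch.cuda.is_available())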
@0xBigBoss
0xBigBoss / config
Created November 24, 2024 19:36
Ghostty config
macos-option-as-alt = true
# cmd+left/right jump to start/end of line (Ctrl-A / Ctrl-E)
keybind = cmd+right=text:\x05
keybind = cmd+left=text:\x01
# alt+left/right move by word (Esc-b / Esc-f)
keybind = alt+left=esc:b
keybind = alt+right=esc:f
@0xBigBoss
0xBigBoss / llama-suggest
Created November 10, 2024 19:01
Wrapper for llama-cli that generates example commands when given a user prompt.
#!/bin/bash
set -eo pipefail
user_input="$*"
if [[ -z "$user_input" ]]; then
user_input="beginner shell commands"
echo "Using default input: $user_input"
fi
@0xBigBoss
0xBigBoss / Simple LLM.md
Last active October 27, 2024 02:06
A simple LLM and tokenizer for demonstrating the KV cache and inference (text generation) with the transformer architecture.

Demo a Simple LLM

Needs PyTorch installed.

$ python ./demo_llm.py
Demonstrating KV cache
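The same KV-cache idea in compact form: a hedged sketch of a greedy decode loop that reuses past_key_values, written against Hugging Face transformers and GPT-2 rather than the gist's own demo_llm.py.

# KV-cache decoding sketch using Hugging Face transformers (not the gist's demo_llm.py).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")        # assumption: any causal LM works
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

ids = tok("The KV cache speeds up", return_tensors="pt").input_ids
past = None  # cached keys/values from previous steps

with torch.no_grad():
    for _ in range(20):
        # With a cache, only the newest token is fed through the model each step.
        out = model(ids if past is None else ids[:, -1:], past_key_values=past, use_cache=True)
        past = out.past_key_values
        next_id = out.logits[:, -1, :].argmax(dim=-1, keepdim=True)  # greedy decode
        ids = torch.cat([ids, next_id], dim=-1)

print(tok.decode(ids[0]))

Feeding only the newest token each step is exactly what the cache buys: attention keys and values for earlier positions are computed once and reused.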