Public gists from Big Boss (0xBigBoss)
@0xBigBoss
0xBigBoss / docker-compose.yml
Created March 2, 2025 23:35
Docker Compose file for running a Shovel instance.
services:
  shovel:
    image: docker.io/indexsupply/shovel:af07
    container_name: shovel
    restart: unless-stopped
    ports:
      - "8383:80"
    env_file:
      - .env
    volumes:
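The compose service above reads its environment from `.env`. A minimal sketch of what that file might contain; the variable name `PG_URL` is a hypothetical placeholder, not a confirmed Shovel configuration key:

```
# hypothetical keys - check the Index Supply Shovel docs for the real names
PG_URL=postgres://shovel:shovel@db:5432/shovel
```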
@0xBigBoss
0xBigBoss / sieve_of_eratosthenes.py
Created January 31, 2025 21:40
Sieve of Eratosthenes algorithm
def sieve_of_eratosthenes(n: int) -> list[int]:
    """
    Generate all prime numbers up to n using the Sieve of Eratosthenes algorithm.
    Args:
        n (int): Upper bound for generating prime numbers
    Returns:
        list[int]: List of all prime numbers up to n
    """
    is_prime = [False, False] + [True] * max(0, n - 1)
    for i in range(2, int(n**0.5) + 1):
        if is_prime[i]:
            is_prime[i * i :: i] = [False] * len(is_prime[i * i :: i])
    return [i for i, prime in enumerate(is_prime) if prime]
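As a quick sanity check on the algorithm, here is a self-contained compact version of the same sieve and its output for n = 30:

```python
def primes_up_to(n: int) -> list[int]:
    # classic sieve: mark multiples of each prime, keep the unmarked indexes
    flags = [False, False] + [True] * max(0, n - 1)
    for i in range(2, int(n**0.5) + 1):
        if flags[i]:
            flags[i * i :: i] = [False] * len(flags[i * i :: i])
    return [i for i, ok in enumerate(flags) if ok]

print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```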
@0xBigBoss
0xBigBoss / shm.c
Created January 31, 2025 04:22
Shared Memory Example
/*
* Example 1: POSIX Shared Memory
* This example shows how to create, write to, and read from a POSIX shared memory segment
*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <sys/mman.h>
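The C example is truncated in this preview; for comparison, the same create/write/read cycle can be sketched with Python's standard-library `multiprocessing.shared_memory`, which wraps the same POSIX `shm_open`/`mmap` machinery. This is an illustrative analogue, not part of the original gist:

```python
from multiprocessing import shared_memory

# create a named shared memory segment and write into it
shm = shared_memory.SharedMemory(create=True, size=64)
shm.buf[:5] = b"hello"

# attach to the same segment by name, as a second process would
reader = shared_memory.SharedMemory(name=shm.name)
data = bytes(reader.buf[:5])
print(data)  # b'hello'

# detach both handles and remove the segment
reader.close()
shm.close()
shm.unlink()
```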
@0xBigBoss
0xBigBoss / ray-start.sh
Last active January 31, 2025 02:57
A script to start Ray nodes within Docker.
#!/bin/bash
# Help function to display usage
show_help() {
echo "Usage: $0 [OPTIONS]"
echo
echo "Options:"
echo " --image IMAGE Docker image to use (required)"
echo " --address IP Head node IP address (required)"
echo " --mode MODE Either 'head' or 'worker' (required)"
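Options like these are typically consumed by a `case`-based parsing loop. A minimal self-contained sketch of that pattern, with example values; this is an assumption about the script's structure, not its actual implementation:

```shell
# example arguments; a real invocation would pass these on the command line
set -- --image rayproject/ray:2.9.0 --address 10.0.0.2 --mode worker

while [ $# -gt 0 ]; do
  case "$1" in
    --image)   IMAGE="$2";   shift 2 ;;
    --address) ADDRESS="$2"; shift 2 ;;
    --mode)    MODE="$2";    shift 2 ;;
    *) echo "Unknown option: $1" >&2; exit 1 ;;
  esac
done

echo "mode=$MODE image=$IMAGE address=$ADDRESS"
```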
@0xBigBoss
0xBigBoss / grpo_demo.py
Created January 31, 2025 00:53 — forked from willccbb/grpo_demo.py
GRPO Llama-1B
# train_grpo.py
import re
import torch
from datasets import load_dataset, Dataset
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import LoraConfig
from trl import GRPOConfig, GRPOTrainer
# Load and prep dataset
-- Find indexes on the table
SELECT indexname, tablename
FROM pg_indexes
WHERE tablename = ${table_name}; -- interpolated as a quoted string literal
-- Find foreign key relationships
SELECT
  tc.table_schema,
  tc.constraint_name,
  tc.table_name,
  kcu.column_name
FROM information_schema.table_constraints AS tc
JOIN information_schema.key_column_usage AS kcu
  ON tc.constraint_name = kcu.constraint_name
WHERE tc.constraint_type = 'FOREIGN KEY'
  AND tc.table_name = ${table_name};
@0xBigBoss
0xBigBoss / README.md
Created January 11, 2025 04:35
Running k3s on a different disk


# stop k3s first and create the target directories on the new disk, then link:
sudo ln -s /data2/k3s-rancher-etc/ /etc/rancher
sudo ln -s /data2/k3s/ /run/k3s
sudo ln -s /data2/k3s-kubelet/ /var/lib/kubelet
sudo ln -s /data2/k3s-rancher/ /var/lib/rancher

@0xBigBoss
0xBigBoss / k3s-over-zerotier.sh
Created January 6, 2025 22:53
Connect k3s over a Zerotier network
curl -sfL https://get.k3s.io | sh -s - \
--bind-address=0.0.0.0 \
--flannel-iface=zt12345678 \
--node-ip=10.0.0.2 \
--cluster-init
# on the control plane, capture the join URL and token for worker nodes
K3S_URL=https://10.0.0.2:6443
K3S_TOKEN=$(sudo cat /var/lib/rancher/k3s/server/node-token)
@0xBigBoss
0xBigBoss / runnning.md
Last active January 2, 2025 23:43
Building vLLM + PyTorch + Torchvision from source

Run vLLM in Distributed Mode with Ray

Prerequisites

A docker image with the vLLM server installed.

export DOCKER_IMAGE=docker.io/fxnlabs/vllm-openai
# or use the following if you want to use the latest version
@0xBigBoss
0xBigBoss / aa-bundler-rpc.ts
Last active December 31, 2024 20:55
Call the AA Bundler RPC Method
async function callDebugRpcMethod(method: string, params: any[]) {
  const response = await fetch("http://localhost:3030/rpc", {
    method: "POST",
    headers: {
      "content-type": "application/json",
    },
    referrer: "http://localhost:3000/",
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: method,
      params: params,
    }),
  });
  return response.json();
}