
Would you use a backend where you just define schema, access policy, and functions?

Basically something like writing smart contracts on the EVM, except they run on a hyperscaler and have regular backend fundamentals.

Here's a mock-up frenchie made for me; I was thinking of something like this:

schema User {
  email: string @private(owner)
  name: string @public
}
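To make the idea concrete, an access policy and a function in the same hypothetical DSL might look like this (every keyword here is invented for illustration, not part of the original mock):

```
policy User {
  read:  @public fields by anyone, @private fields by owner
  write: owner only
}

function signup(email: string, name: string) -> User {
  require unique(User.email, email)
  return User { email, name }
}
```

The point being that schema, policy, and functions are the whole deployable unit, the way a contract is on the EVM.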
@sssemil
sssemil / cli.py
Created January 27, 2026 14:33
Simple CLI for PersonaPlex - talk and listen via terminal
#!/usr/bin/env python3
"""
Simple CLI for PersonaPlex - talk and listen via terminal.
"""
import argparse
import os
import sys
import time
import threading
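The preview stops at the imports; a plausible argparse skeleton for the described talk/listen interface might look like the following (the subcommand names and the `--server` default are guesses for illustration, not PersonaPlex's actual interface):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Subcommands and defaults here are illustrative guesses,
    # not the gist's real interface.
    p = argparse.ArgumentParser(
        description="Talk to PersonaPlex from the terminal")
    p.add_argument("--server", default="http://localhost:8000",
                   help="PersonaPlex server URL (hypothetical default)")
    sub = p.add_subparsers(dest="command")
    sub.add_parser("listen", help="print incoming messages to stdout")
    say = sub.add_parser("say", help="send one line of text")
    say.add_argument("text")
    return p
```

A `listen` command would then run its loop on a background thread, which is presumably what the `threading` import is for.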
@sssemil
sssemil / Auto-Authorize Claude OAuth (claude.ai)-0.0.user.js
Created January 2, 2026 15:06
Auto-Authorize Claude OAuth (claude.ai)
// ==UserScript==
// @name Auto-Authorize Claude OAuth (claude.ai)
// @match https://claude.ai/oauth/authorize*
// @run-at document-end
// ==/UserScript==
(() => {
  const clickAuthorize = () => {
    const btn = [...document.querySelectorAll("button")]
      .find(b => /authorize/i.test(b.textContent || ""));
    if (btn) btn.click();
  };
  // The page renders client-side, so poll until the button appears.
  setInterval(clickAuthorize, 500);
})();
@sssemil
sssemil / sandbox-run
Created December 9, 2025 22:32
Run whatever in a sandbox via firejail with a bit more convenience
#!/usr/bin/env bash
set -e
usage() {
echo "usage: $0 <command> -a <writable_path> [-a <writable_path>...]"
exit 1
}
if [ "$#" -lt 3 ]; then
  usage
fi
@sssemil
sssemil / 4090_PCIe_test.md
Created March 5, 2024 16:13
Difference in inference speed for SDXL on a 4090 with 4 vs 16 PCIe 4.0 lanes

Summary

txt2img SDXL 1024×1024:

65.417351 / 65.623950 = 0.996851774 (≈0.3% difference)

img2img SDXL 1024×1024 (1024×1024 input image):

54.623950 / 55.738662 = 0.980001099 (≈2.0% difference)
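The ratios above work out to well under a 2% gap either way; a quick check of the arithmetic (the summary doesn't say which run is 4-lane vs 16-lane, so only the magnitude is meaningful):

```python
# Ratios copied from the summary above; only the arithmetic is checked here.
txt2img = 65.417351 / 65.623950
img2img = 54.623950 / 55.738662
print(f"txt2img ratio: {txt2img:.6f} (~{(1 - txt2img) * 100:.2f}% difference)")
print(f"img2img ratio: {img2img:.6f} (~{(1 - img2img) * 100:.2f}% difference)")
```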

@sssemil
sssemil / depth_cam.py
Created November 15, 2023 11:26
Simple depth from monocular cam demo
import numpy as np
import torch
import cv2
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForDepthEstimation
from transformers import DPTForDepthEstimation, DPTImageProcessor
def depth_estimation(model, feature_extractor, image):
    inputs = feature_extractor(images=image, return_tensors="pt").to("cuda")
    outputs = model(**inputs)
    return outputs.predicted_depth
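The raw `predicted_depth` map isn't directly displayable; a small helper (my addition, not in the gist) to min-max scale it to an 8-bit image for `cv2.imshow`:

```python
import numpy as np

def depth_to_gray(depth: np.ndarray) -> np.ndarray:
    """Min-max scale a raw depth map to uint8 [0, 255] for display."""
    d = depth.astype(np.float32)
    d = (d - d.min()) / max(float(d.max() - d.min()), 1e-8)
    return (d * 255.0).astype(np.uint8)
```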
@sssemil
sssemil / minifier_launcher.py
Created November 14, 2023 06:17
Minified repo
from math import inf
import torch
from torch import tensor, device
import torch.fx as fx
import torch._dynamo
from torch._dynamo.testing import rand_strided
from torch._dynamo.debug_utils import run_fwd_maybe_bwd
import torch._dynamo.config
@sssemil
sssemil / zsh_setup.sh
Last active October 28, 2023 18:03
Basic zsh setup
#!/bin/bash
set -e
# Check if zsh is installed
if ! command -v zsh &> /dev/null; then
echo "zsh is not installed. Please install zsh first."
exit 1
fi
@sssemil
sssemil / deep_cam.py
Created October 22, 2023 17:46
Cam to depth stream
import numpy as np
import torch
import cv2
from PIL import Image
from transformers import DPTForDepthEstimation, DPTImageProcessor
model = DPTForDepthEstimation.from_pretrained("Intel/dpt-hybrid-midas", low_cpu_mem_usage=True).to("cuda")
feature_extractor = DPTImageProcessor.from_pretrained("Intel/dpt-hybrid-midas")
# Start capturing video from the first camera device
cap = cv2.VideoCapture(0)
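Between capture and inference, the BGR frames OpenCV returns need converting to the RGB PIL images the DPT processor expects; a minimal helper for that step (an assumption about how the truncated loop proceeds, not code from the gist):

```python
import numpy as np
from PIL import Image

def frame_to_pil(frame: np.ndarray) -> Image.Image:
    """Convert an OpenCV BGR frame of shape (H, W, 3) to an RGB PIL image."""
    rgb = np.ascontiguousarray(frame[:, :, ::-1])  # reverse channels: BGR -> RGB
    return Image.fromarray(rgb)
```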

How to get this

Edit init.rc as follows:

on early-init
    # Prepare early debugfs
    mount debugfs none /sys/kernel/debug
    chmod 0755 /sys/kernel/debug/tracing

    # Enable i2c tracer
    write /sys/kernel/debug/tracing/events/i2c/enable 1