Alex Liberzon (alexlib)

:octocat:
Working
View GitHub Profile
alexlib / rustimport_jupyter_openpiv_rust.ipynb
Last active April 9, 2026 22:29
alexlib / sync_forks_and_report.sh
Created April 8, 2026 19:06
GitHub Action that syncs all my forks and stores a report in a Markdown file, generated by GitHub Copilot
#!/usr/bin/env bash
set -euo pipefail
# Config
USER_LOGIN="${1:-alexlib}"
REPORT_FILE="${2:-fork-sync-report.md}"
SYNC="${SYNC:-1}" # SYNC=1 to sync clean forks, SYNC=0 to only report
PER_PAGE="${PER_PAGE:-100}" # pagination size
tmp_json="$(mktemp)"
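The preview above shows only the script's configuration block. The report-generation step it describes (a Markdown summary of the sync results) can be sketched in Python; the fork names, statuses, and table layout below are illustrative, not the real script's output:

```python
# Sketch of the Markdown report step of a fork-sync run.
# Field names and statuses are hypothetical, not taken from the actual script.

def build_report(results):
    """results: list of (repo_name, status) tuples, e.g. status in {'synced', 'dirty', 'error'}."""
    lines = ["# Fork sync report", "", "| Fork | Status |", "|------|--------|"]
    for name, status in results:
        lines.append(f"| {name} | {status} |")
    return "\n".join(lines)

report = build_report([("openpiv-python", "synced"), ("llama.cpp", "dirty")])
print(report)
```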
alexlib / llm-wiki.md
Created April 4, 2026 17:39 — forked from karpathy/llm-wiki.md
llm-wiki

LLM Wiki

A pattern for building personal knowledge bases using LLMs.

This is an idea file: it is designed to be copy-pasted into your own LLM agent (e.g. OpenAI Codex, Claude Code, OpenCode / Pi, etc.). Its goal is to communicate the high-level idea; your agent will build out the specifics in collaboration with you.

The core idea

Most people's experience with LLMs and documents looks like RAG: you upload a collection of files, the LLM retrieves relevant chunks at query time, and generates an answer. This works, but the LLM is rediscovering knowledge from scratch on every question. There's no accumulation. Ask a subtle question that requires synthesizing five documents, and the LLM has to find and piece together the relevant fragments every time. Nothing is built up. NotebookLM, ChatGPT file uploads, and most RAG systems work this way.
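The query-time retrieval step described above can be reduced to a toy sketch. Real RAG systems rank chunks with learned embeddings; this bag-of-words version only illustrates the pattern of rediscovering relevant fragments on every question:

```python
# Toy retrieval: score each chunk by word overlap with the query, return the top k.
# Illustrative only; real systems use vector embeddings, not raw word sets.

def retrieve(query, chunks, k=2):
    q = set(query.lower().split())
    scored = sorted(chunks, key=lambda c: -len(q & set(c.lower().split())))
    return scored[:k]

chunks = [
    "turbulence decays behind the grid",
    "spheres settle through stratified interfaces",
    "marimo notebooks store code as python files",
]
print(retrieve("how do spheres settle", chunks, k=1))
```

Nothing accumulates between calls: each query re-scores the whole corpus from scratch, which is exactly the limitation the note argues against.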

alexlib / llamaCPP_Compilation_Execution_low-resources.md
Created April 3, 2026 14:13 — forked from 0ut0flin3/llamaCPP_Compilation_Execution_low-resources.md
llama.cpp on 2-core CPU + 8GB DDR2: My Full CPU-only Guide

Below are the generic details and resources of the machine I used:

  • Operating System: Linux Mint 22.3 - MATE 64-bit (safe-graphics / boot with 'nomodeset') - Linux Kernel: 6.14.0-37-generic
  • GPU: not relevant / not used
  • CPU: 2 cores – Intel® Core™2 Duo CPU E8400 @ 3.00GHz × 2
  • RAM: 8 GB (DDR2)

Given my recent good experience running CPU-only inference (no GPU) on local models of up to 2B-4B parameters, with more than acceptable speed and performance on really limited hardware and very little RAM, I decided to share the instructions and commands I used to compile llama.cpp on my machine, along with the arguments I currently use to launch it.

alexlib / research_review.md
Created April 3, 2026 11:12
Based on Simons Writing Guide

📝 Scientific Manuscript Revision Checklist

Based on the Simons "Musings on Writing" Framework

🛠 Phase 1: The Physical Intervention

  • Physical Printout: Have you printed this draft and read it on paper? (Simons Rule: Repetition and flow issues are invisible on a screen).
  • The Reverse Outline: Have you written a single phrase summarizing the point of every paragraph?
    • If you can't summarize it in one phrase, the paragraph needs revision.

🏛 Phase 2: Structure & Organization

  • The Opening: Does the paper establish a controversy, mystery, or conundrum? (Avoid "laundry list" introductions).
alexlib / siv_marimo.py
Created February 17, 2026 21:40
smoke image velocimetry: a marimo notebook with Python code (not tested)
# /// script
# requires-python = ">=3.14"
# dependencies = [
# "marimo>=0.19.9",
# "matplotlib==3.10.8",
# "numpy==2.4.2",
# "pydantic-ai-slim==1.57.0",
# "scipy==1.17.0",
# ]
# ///
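The script header above only pins dependencies. The core computation of smoke image velocimetry (like PIV) is estimating the displacement between two interrogation windows from the peak of their cross-correlation; a minimal NumPy sketch of that step, not taken from the notebook itself:

```python
import numpy as np

# Estimate the integer pixel shift between two windows via FFT cross-correlation.
# This is the textbook PIV/SIV step, sketched for illustration only.

def estimate_shift(a, b):
    """Return the (dy, dx) that best maps window a onto window b."""
    fa = np.fft.fft2(a - a.mean())
    fb = np.fft.fft2(b - b.mean())
    corr = np.fft.ifft2(fa.conj() * fb).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks in the upper half of each axis correspond to negative shifts.
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
a = rng.random((32, 32))
b = np.roll(a, (3, -2), axis=(0, 1))  # shift down 3 rows, left 2 columns
print(estimate_shift(a, b))  # expect (3, -2)
```

Subtracting each window's mean before correlating suppresses the zero-frequency bias that would otherwise dominate the correlation plane.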
alexlib / enhance_piv_image_nb.py
Last active January 31, 2026 11:47
marimo notebook showing PIV image enhancement filter
import marimo
__generated_with = "0.19.6"
app = marimo.App(width="medium", auto_download=["ipynb"])
with app.setup:
    import cv2
    import numpy as np
    import matplotlib.pyplot as plt
    import marimo as mo
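The notebook's enhancement filter is built on OpenCV; a NumPy-only stand-in for one typical PIV preprocessing step (background subtraction followed by contrast stretching) looks like this. The block-minimum background and window size are assumptions for illustration, not the notebook's actual filter:

```python
import numpy as np

# Sketch of a common PIV image enhancement step:
# remove a blockwise-minimum background estimate, then stretch contrast to [0, 255].

def enhance(img, bg_window=8):
    img = img.astype(float)
    h, w = img.shape  # assumes h and w are multiples of bg_window
    # crude background: minimum over non-overlapping blocks, upsampled back
    bg = img.reshape(h // bg_window, bg_window, w // bg_window, bg_window).min(axis=(1, 3))
    bg = np.repeat(np.repeat(bg, bg_window, axis=0), bg_window, axis=1)
    out = np.clip(img - bg, 0, None)
    span = out.max() - out.min()
    scale = 255.0 / span if span > 0 else 0.0
    return np.rint((out - out.min()) * scale).astype(np.uint8)
```

Bright tracer particles survive the subtraction while the slowly varying background is removed, which is what makes the subsequent correlation or detection steps more robust.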
alexlib / opencv_sphere_tracers_with_trackpy_nb.py
Created January 17, 2026 20:51
Use OpenCV to detect a large sphere and multiple flow tracers in a thin slice around the sphere. Part of the Turbulence Structure Laboratory project on the settling of spheres through stratified interfaces. An OpenPTV competitor :) Uses marimo (read about it; no need to thank me, thank them)
import marimo
__generated_with = "0.19.4"
app = marimo.App(width="medium", auto_download=["ipynb"])
@app.cell
def _():
    import marimo as mo
    import cv2
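The detection idea in this notebook, separating one large sphere from many small tracers, can be sketched without OpenCV: threshold, label connected components, then split blobs by area. The notebook itself uses OpenCV; this scipy version, with assumed threshold and area parameters, only illustrates the size-based separation:

```python
import numpy as np
from scipy import ndimage

# Threshold the image, label connected blobs, and separate them by area:
# the one large blob is the sphere, the small ones are flow tracers.
# Parameters (thresh, sphere_area) are illustrative assumptions.

def detect(img, thresh=128, sphere_area=50):
    labels, n = ndimage.label(img > thresh)
    sphere, tracers = None, []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        blob = (ys.mean(), xs.mean(), ys.size)  # (centroid_y, centroid_x, area)
        if ys.size >= sphere_area:
            sphere = blob
        else:
            tracers.append(blob)
    return sphere, tracers
```

Area is a crude but effective discriminator here because the sphere's image is orders of magnitude larger than a tracer's; a real pipeline would add circularity or Hough-based checks.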
alexlib / marimo_opencv_matplotlib_detect_sphere_tracers_batch_run_nb.py
Created December 21, 2025 19:54
marimo notebook that uses OpenCV to detect a large sphere and flow tracers, and presents the results with matplotlib. Also creates a batch run.
# /// script
# requires-python = ">=3.13"
# dependencies = [
# "google-genai==1.56.0",
# "matplotlib==3.10.8",
# "nbformat==5.10.4",
# "numpy==2.2.6",
# "opencv-python==4.12.0.88",
# "pandas==2.3.3",
# ]
alexlib / untitled14.ipynb
Last active October 2, 2025 12:13
Self-tutoring demo of reservoir computing using the list sorting algorithm