Ian Maurer (imaurer)
@imaurer
imaurer / Unsloth Gemma 4 MOE Setup Guide.md
Created April 14, 2026 23:26
Unsloth Gemma 4 MOE Setup Guide

Running a 26B parameter model with only 6GB RAM using mmap

Overview

This guide covers running Gemma 4 26B MOE (Mixture of Experts) locally on an Apple Silicon Mac using llama.cpp with memory-mapped files. The MOE architecture activates only 8 of its 128 experts per token, so only a small fraction of the weights needs to be hot in RAM at any moment.

Spec Value
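With mmap, the full GGUF file stays on disk and only the pages actually touched stay resident; with 8-of-128 routing, the hot expert weights are a small slice of the model. A back-of-envelope sketch of that slice (parameter counts from the text above; the 4-bit quantization width is an assumption, and shared layers plus KV cache are ignored):

```python
# Rough estimate of the "hot" expert weights for an 8-of-128 MOE.
# Figures from the guide above; BYTES_PER_PARAM assumes ~4-bit quantization.
TOTAL_PARAMS = 26e9      # 26B total parameters
EXPERTS_TOTAL = 128
EXPERTS_ACTIVE = 8
BYTES_PER_PARAM = 0.5    # ~4 bits per weight

active_params = TOTAL_PARAMS * EXPERTS_ACTIVE / EXPERTS_TOTAL
active_gb = active_params * BYTES_PER_PARAM / 1e9
print(f"{active_params / 1e9:.3f}B active params ≈ {active_gb:.2f} GB hot")
# → 1.625B active params ≈ 0.81 GB hot
```

Shared (non-expert) layers, attention weights, and the KV cache presumably account for the rest of the quoted ~6GB working set; actual residency also depends on how routing spreads tokens across experts.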
@imaurer
imaurer / karpathy-no-priors.md
Created March 21, 2026 18:29
How Software Gets Made Now — Notes from Karpathy + the Auto Research Movement (March 2026)

A synthesis of Andrej Karpathy's No Priors interview (March 2026), the auto research ecosystem, and what it all means for how we build things.


The Shift: Manifesting, Not Coding

[0:00–6:36]

Karpathy says he hasn't typed a line of code since December 2025. Not because he stopped working — he works 16 hours a day — but because the verb changed:

@imaurer
imaurer / claude-code-tools.md
Created January 10, 2026 10:15 — forked from wong2/claude-code-tools.md
Tools and system prompt of Claude Code

Task

Launch a new agent that has access to the following tools: Bash, Glob, Grep, LS, exit_plan_mode, Read, Edit, MultiEdit, Write, NotebookRead, NotebookEdit, WebFetch, TodoRead, TodoWrite, WebSearch. When you are searching for a keyword or file and are not confident that you will find the right match in the first few tries, use the Agent tool to perform the search for you.

When to use the Agent tool:

  • If you are searching for a keyword like "config" or "logger", or for questions like "which file does X?", the Agent tool is strongly recommended

When NOT to use the Agent tool:

  • If you want to read a specific file path, use the Read or Glob tool instead of the Agent tool, to find the match more quickly
  • If you are searching for a specific class definition like "class Foo", use the Glob tool instead, to find the match more quickly
  • If you are searching for code within a specific file or set of 2-3 files, use the Read tool instead of the Agent tool, to find the match more quickly
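The rules above amount to a small decision table. A hypothetical encoding of them (this function and its parameters are illustrative only, not Claude Code's actual routing logic):

```python
def pick_tool(query: str, known_path: bool = False, file_scope: int = 0) -> str:
    """Illustrative tool router mirroring the guidance above.

    known_path: the caller already has a specific file path.
    file_scope: number of files the search is confined to (0 = unknown).
    """
    if known_path:
        return "Read"    # specific file path: read it directly
    if query.startswith("class "):
        return "Glob"    # specific definition: pattern match is faster
    if 0 < file_scope <= 3:
        return "Read"    # small known set of files: read them directly
    return "Agent"       # open-ended keyword or "which file does X?" search
```

So `pick_tool("class Foo")` routes to Glob, while a bare keyword like `pick_tool("config")` falls through to the Agent tool.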
@imaurer
imaurer / create-cli-skill.md
Created December 31, 2025 20:00
Use https://clig.dev/ to improve @steipete’s CLI skill prompt

Determining the Minimum Supported Pydantic Version (Binary Cut)

This note documents how we determined the lowest Pydantic 2.x version that works across our supported Python versions (3.10–3.14) using uv and pytest. It also records the final requirement change made to pyproject.toml.

Objective

  • Identify the minimum Pydantic version that passes our test suite across Python 3.10, 3.11, 3.12, 3.13, and 3.14.
  • Update our dependency constraint accordingly.
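The "binary cut" described above can be sketched as a bisection over a sorted candidate list, assuming passes are monotonic in version (once a version works, later ones do too). The `uv run --with pydantic==X` invocation is one plausible way to pin a single version per run; the candidate list and test command here are illustrative, not the project's actual ones:

```python
import subprocess

# Hypothetical sorted candidate versions, oldest first.
CANDIDATES = ["2.0.3", "2.1.1", "2.2.1", "2.3.0", "2.4.2", "2.5.3", "2.6.4"]

def suite_passes(version: str, python: str = "3.12") -> bool:
    """Run pytest in an ephemeral uv environment pinned to one pydantic version."""
    result = subprocess.run(
        ["uv", "run", "--python", python,
         "--with", f"pydantic=={version}", "pytest", "-q"],
        capture_output=True,
    )
    return result.returncode == 0

def lowest_passing(candidates, passes):
    """Bisect for the lowest version whose test suite passes, or None."""
    lo, hi = 0, len(candidates) - 1
    answer = None
    while lo <= hi:
        mid = (lo + hi) // 2
        if passes(candidates[mid]):
            answer = candidates[mid]
            hi = mid - 1   # a lower version might also pass
        else:
            lo = mid + 1   # everything at or below mid fails
    return answer
```

Repeating `lowest_passing` once per supported Python (3.10 through 3.14) and taking the maximum of the results gives the floor to write into `pyproject.toml`.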
@imaurer
imaurer / test-docs.py
Created September 3, 2025 21:30
test biomcp docs
# File: run.py
#!/usr/bin/env python3
"""
Working demonstration of the BioMCP API with actual function calls.
This shows the corrected API in action.
"""
import asyncio
from biomcp.variants.search import (

GitHub Issue: Enable Schema-Formatted Output for Tool-Using Chains

1. Problem:

Currently, the llm CLI's --schema option applies to the direct output of a single Language Model (LLM) call. When tools (via --tool or --functions) are used, the LLM engages in a multi-step chain (e.g., ReAct pattern) where intermediate outputs are tool call requests or textual reasoning. There's no direct way to specify that the final, user-visible result of such a multi-step, tool-using chain should conform to a user-defined schema. The existing --schema option doesn't automatically apply to the culmination of this chain.

2. Alternatives Considered:

  • A. New CLI Option: Introducing a distinct option (e.g., --final-schema or --output-schema) specifically for specifying the schema of the final output after a tool chain. This would keep the existing --schema behavior for direct, single-turn schema output and make the post-chain formatting explicit.
  • B. Overload Existing --schema (Implic
@imaurer
imaurer / llm_mcp_demo.sh
Last active May 28, 2025 14:06
llm-mcp example setup flow
# Demo output - assume llm 0.26 installed and in path
#
# llm-mcp repo:
# https://github.com/genomoncology/llm-mcp
#
# includes:
# - Desktop Command (local MCP): https://desktopcommander.app/
# - Git MCP for simonw/llm: https://gitmcp.io/simonw/llm (auth-less remote example)
#
# version 0.0.2
@imaurer
imaurer / clean_text.py
Created May 21, 2025 14:01
Fix ChatGPT hyphens and spaces
#!/usr/bin/env -S uv --quiet run --script
# /// script
# requires-python = ">=3.11"
# dependencies = [
# "typer",
# ]
# ///
"""
Clean Text: Unicode Hyphen and Space Normalizer
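The script preview cuts off at the docstring. A minimal sketch of what a hyphen-and-space normalizer like this presumably does, using a `str.translate` table (the exact character sets are assumptions, not taken from the truncated script):

```python
# Map exotic Unicode hyphens and spaces (common in ChatGPT output)
# to their plain ASCII equivalents.
UNICODE_HYPHENS = "\u2010\u2011\u2012\u2013\u2014\u2015\u2212"  # dashes, minus
UNICODE_SPACES = "\u00a0\u2007\u2009\u200a\u202f\u3000"         # nbsp, thin, etc.

TRANSLATION = {ord(c): "-" for c in UNICODE_HYPHENS}
TRANSLATION.update({ord(c): " " for c in UNICODE_SPACES})

def clean_text(text: str) -> str:
    """Replace Unicode hyphen and space variants with ASCII '-' and ' '."""
    return text.translate(TRANSLATION)
```

For example, an em dash plus a non-breaking space (`"A\u2014B\u00a0C"`) normalizes to `"A-B C"`.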

You

What does this change https://grants.nih.gov/grants/guide/notice-files/NOT-OD-25-047.html


ChatGPT

Bottom line: NIH’s new Public Access Policy (NOT-OD-25-047) scraps the 12-month waiting period and makes every NIH-funded paper publicly available immediately at publication, starting with manuscripts accepted on or after December 31, 2025.