
Ian Maurer (imaurer)

@imaurer
imaurer / pb.py
Last active February 9, 2025 22:27
Configurable script for filtering files found in a git repo and copying content to clipboard
#!/usr/bin/env -S uv --quiet run --script
# /// script
# requires-python = ">=3.11"
# dependencies = [
#     "click",
#     "pyperclip",
#     "pathspec",
#     "rich",
# ]
# ///
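The header above only declares the script's dependencies; the core flow can be sketched as follows. This is a hedged reconstruction, using stdlib `fnmatch` in place of the `pathspec` dependency; the function names and pattern handling are illustrative, not taken from the actual gist.

```python
# Sketch of pb.py's idea: list git-tracked files, filter them by
# glob-style patterns, and gather their contents for the clipboard.
# fnmatch stands in for pathspec; filter_paths/collect are assumed names.
import fnmatch
import subprocess

def filter_paths(paths, patterns):
    """Keep paths matching at least one glob-style pattern."""
    return [p for p in paths if any(fnmatch.fnmatch(p, pat) for pat in patterns)]

def collect(patterns):
    # `git ls-files` prints every tracked path, one per line
    out = subprocess.run(["git", "ls-files"], capture_output=True,
                         text=True, check=True)
    chunks = []
    for path in filter_paths(out.stdout.splitlines(), patterns):
        with open(path, encoding="utf-8") as fh:
            chunks.append(f"# {path}\n{fh.read()}")
    # pb.py would hand this string to pyperclip.copy()
    return "\n\n".join(chunks)
```

The real script layers Click options and Rich output on top of this; `pathspec` additionally understands `.gitignore`-style negation, which plain `fnmatch` does not.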
@imaurer
imaurer / ai_patching_deep_research.md
Created February 3, 2025 14:45
OpenAI Deep Research on Code Patching

LLM-Based Coding Assistants and Patch Application

Large language model (LLM) coding assistants like Sourcegraph Cody, Aider, and Tabby help developers generate and apply code changes. This report examines how these open-source tools prompt LLMs to produce patches, integrate the changes into code, handle common issues, and verify results, and it notes the challenges that remain.

Prompting Strategies

Structured Prompts for Code Edits – These assistants carefully craft prompts so the LLM knows exactly how to output changes. For example, Aider uses specialized edit formats: it can ask the LLM for a full file rewrite or for a diff. Aider often defaults to a diff format, where the LLM is told to return only the changed parts of files using a syntax similar to a unified diff or marked “search/replace” blocks. This reduces token usage and focuses the LLM on the edits. The prompt includes instructions like “produce changes in this format” with file paths and code fences, so the model returns patches instead of entire files.
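The “search/replace block” format described above can be illustrated with a minimal apply step: the model emits the exact lines to find and their replacement, and the tool swaps them in. The marker-free function below is a hedged sketch of the general technique, not Aider's actual syntax or implementation.

```python
# One search/replace edit: find the exact search block in the source
# and substitute the replacement, failing loudly if the anchor is absent
# (a common failure mode when the model misquotes the original code).
def apply_search_replace(source: str, search: str, replace: str) -> str:
    """Apply a single search/replace edit to source text."""
    if search not in source:
        raise ValueError("search block not found; patch cannot be applied")
    # Replace only the first occurrence so the edit stays unambiguous
    return source.replace(search, replace, 1)

code = "def greet():\n    print('hi')\n"
patched = apply_search_replace(code, "    print('hi')", "    print('hello')")
```

The explicit failure on a missing anchor mirrors why these tools must handle mismatches: if the LLM's quoted “search” text drifts even slightly from the real file, the patch cannot be applied verbatim.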

#!/usr/bin/env -S uv --quiet run --script
# /// script
# requires-python = ">=3.11"
# dependencies = [
#     "pyahocorasick",
# ]
# ///
"""
Icon Slicer Script
<!-- `https://api.substack.com/feed/podcast/160495909/bdd3acc7cd18a69c68ad250654009252.mp3` -->
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Podcast MP3 Player</title>
<style>
body {
font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;

You

What does this change https://grants.nih.gov/grants/guide/notice-files/NOT-OD-25-047.html


ChatGPT

Bottom line: NIH’s new Public Access Policy (NOT-OD-25-047) scraps the 12-month waiting period and makes every NIH-funded paper publicly available immediately at publication, starting with manuscripts accepted on or after December 31, 2025.

@imaurer
imaurer / clean_text.py
Created May 21, 2025 14:01
Fix ChatGPT hyphens and spaces
#!/usr/bin/env -S uv --quiet run --script
# /// script
# requires-python = ">=3.11"
# dependencies = [
#     "typer",
# ]
# ///
"""
Clean Text: Unicode Hyphen and Space Normalizer
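The normalization the gist's title describes can be sketched with a stdlib translation table: map typographic hyphens/dashes and exotic space characters that LLM output tends to contain back to plain ASCII. The specific character sets below are an assumption, not the gist's actual table.

```python
# Assumed character sets: common Unicode dash variants (hyphen through
# em dash, plus the minus sign) and no-break/thin space variants.
DASHES = "\u2010\u2011\u2012\u2013\u2014\u2212"
SPACES = "\u00a0\u2009\u200a\u202f\u2007"

# Build one translation table mapping dashes to '-' and spaces to ' '
TABLE = str.maketrans({c: "-" for c in DASHES} | {c: " " for c in SPACES})

def clean_text(text: str) -> str:
    """Replace Unicode hyphen and space variants with ASCII '-' and ' '."""
    return text.translate(TABLE)
```

`str.translate` makes a single pass over the text, so this stays fast even on large pasted documents; the real script wraps something like this in a Typer CLI.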
@imaurer
imaurer / llm_mcp_demo.sh
Last active May 28, 2025 14:06
llm-mcp example setup flow
# Demo output - assume llm 0.26 installed and in path
#
# llm-mcp repo:
# https://github.com/genomoncology/llm-mcp
#
# includes:
# - Desktop Commander (local MCP): https://desktopcommander.app/
# - Git MCP for simonw/llm: https://gitmcp.io/simonw/llm (auth-less remote example)
#
# version 0.0.2

GitHub Issue: Enable Schema-Formatted Output for Tool-Using Chains

1. Problem:

Currently, the llm CLI's --schema option applies to the direct output of a single Language Model (LLM) call. When tools (via --tool or --functions) are used, the LLM engages in a multi-step chain (e.g., ReAct pattern) where intermediate outputs are tool call requests or textual reasoning. There's no direct way to specify that the final, user-visible result of such a multi-step, tool-using chain should conform to a user-defined schema. The existing --schema option doesn't automatically apply to the culmination of this chain.

2. Alternatives Considered:

  • A. New CLI Option: Introducing a distinct option (e.g., --final-schema or --output-schema) specifically for specifying the schema of the final output after a tool chain. This would keep the existing --schema behavior for direct, single-turn schema output and make the post-chain formatting explicit.
  • **B. Overload Existing --schema (Implic