Chidvi Doddi (DoddiC)

👨‍🎓
Hi!
View GitHub Profile

Prompt Learning

Given examples of desired output, automatically discover the best system prompt for any LLM task.

The Problem

Any codebase that uses LLMs has prompts — for text generation, data transformation, classification, summarization. These prompts are hand-written, manually iterated, and frozen. When requirements shift or quality degrades, a developer tweaks the prompt by hand again. There's no systematic way for agents to self-optimize their own prompts given examples of what good output looks like.
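One way to make "discover the best system prompt from examples" concrete is a greedy search: run each candidate system prompt against an example input, score the output against the desired output, and keep the best. The sketch below is an illustration only, not the gist's implementation; the local Ollama endpoint, the model name, and the word-overlap scorer are all assumptions.

```shell
#!/bin/sh
# Sketch: greedy search over candidate system prompts, scored by word
# overlap with a desired example output. Endpoint/model are assumptions.

# score WANT GOT -> prints how many words of WANT appear in GOT
score() {
  want=$1; got=$2; n=0
  for w in $want; do
    case " $got " in *" $w "*) n=$((n + 1)) ;; esac
  done
  echo "$n"
}

# run_prompt SYSTEM INPUT -> model output (assumes a local Ollama server)
run_prompt() {
  curl -s http://localhost:11434/api/generate \
    -d "{\"model\":\"llama3\",\"system\":\"$1\",\"prompt\":\"$2\",\"stream\":false}" |
    sed -n 's/.*"response":"\([^"]*\)".*/\1/p'
}

# Guarded so sourcing this file doesn't hit the network.
if [ "${RUN_SEARCH:-0}" = 1 ]; then
  best_score=-1; best_prompt=''
  for candidate in "Answer tersely." "Answer in one word." "Explain step by step."; do
    out=$(run_prompt "$candidate" "$EXAMPLE_INPUT")
    s=$(score "$EXAMPLE_OUTPUT" "$out")
    if [ "$s" -gt "$best_score" ]; then
      best_score=$s; best_prompt=$candidate
    fi
  done
  echo "best prompt: $best_prompt"
fi
```

A real system would mutate prompts between rounds rather than exhaust a fixed list, and would score with something stronger than word overlap, but the loop shape is the same.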

The Idea

@rohitg00
rohitg00 / llm-wiki.md
Last active May 14, 2026 04:04 — forked from karpathy/llm-wiki.md
LLM Wiki v2 — extending Karpathy's LLM Wiki pattern with lessons from building agentmemory

LLM Wiki v2

A pattern for building personal knowledge bases using LLMs. Extended with lessons from building agentmemory, a persistent memory engine for AI coding agents.

This builds on Andrej Karpathy's original LLM Wiki idea file. Everything in the original still applies. This document adds what we learned running the pattern in production: what breaks at scale, what's missing, and what separates a wiki that stays useful from one that rots.

What the original gets right

The core insight is correct: stop re-deriving, start compiling. RAG retrieves and forgets. A wiki accumulates and compounds. The three-layer architecture (raw sources, wiki, schema) works. The operations (ingest, query, lint) cover the basics. If you haven't read the original, start there.
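The lint operation is what keeps a wiki from rotting, so here is one way it could look. This is a sketch under assumed conventions, not the original's prescribed layout: it imagines a `sources/` directory of raw material and a `wiki/` directory of compiled pages, and flags pages that fell behind their sources.

```shell
#!/bin/sh
# Sketch of a "lint" pass: flag wiki pages whose raw source changed after
# the page was last compiled, or that were never compiled at all.
# The sources/ + wiki/ layout is an illustrative assumption.

# lint_stale SOURCES_DIR WIKI_DIR
lint_stale() {
  src_dir=$1; wiki_dir=$2
  for src in "$src_dir"/*; do
    [ -f "$src" ] || continue
    page=$wiki_dir/$(basename "$src")
    if [ ! -f "$page" ]; then
      echo "MISSING: no wiki page for $(basename "$src")"
    elif [ "$src" -nt "$page" ]; then
      echo "STALE: $(basename "$src") changed after its wiki page"
    fi
  done
}
```

Running something like this on every ingest is one answer to "what separates a wiki that stays useful from one that rots": staleness becomes a visible, mechanical signal instead of a vibe.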

@greenstevester
greenstevester / how-to-setup-ollama-on-a-macmini.md
Last active May 13, 2026 01:56
April 2026 TLDR setup for Ollama + Gemma 4 12B on a Mac mini (Apple Silicon) — auto-start, preload, and keep-alive

April 2026 TLDR Setup for Ollama + Gemma 4 on a Mac mini (Apple Silicon)

Prerequisites

  • Mac mini with Apple Silicon (M1/M2/M3/M4/M5)
  • At least 16GB unified memory for Gemma 4 (default 8B)
  • macOS with Homebrew installed
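Given those prerequisites, the auto-start, preload, and keep-alive steps the title promises could look roughly like this. The model tag is a guess (check `ollama list` and the model library for the real Gemma 4 tag); the Homebrew and Ollama commands themselves are standard.

```shell
brew install ollama                 # server + CLI
brew services start ollama          # launchd auto-starts it at login

ollama pull gemma4:12b              # hypothetical tag for Gemma 4 12B

# Preload and pin: keep_alive of -1 tells the server never to unload
# the model, so the first real request doesn't pay the load cost.
curl -s http://localhost:11434/api/generate \
  -d '{"model": "gemma4:12b", "keep_alive": -1}'
```

Setting the `OLLAMA_KEEP_ALIVE` environment variable for the service achieves the same pinning without the explicit preload request.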
@brandonpollack23
brandonpollack23 / .bashrc
Created April 3, 2026 03:45
Hacker news motd script
# Hacker News MOTD
_hn_output=$(source "$HOME/rcfiles/hacker-news.sh")
if [[ -n "$_hn_output" ]]; then
  echo "$_hn_output"
else
  echo "⏳ Fetching Hacker News in the background..."
fi
unset _hn_output
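The gist only shows the `.bashrc` side; the sourced `hacker-news.sh` is not included. A plausible sketch of it, using the public Hacker News Firebase API, might look like this (the function names and output format are invented for illustration):

```shell
#!/usr/bin/env bash
# Guess at hacker-news.sh's contents; uses the public HN Firebase API.

# hn_format RANK TITLE -> one MOTD line
hn_format() {
  printf '%2d. %s\n' "$1" "$2"
}

# hn_top [COUNT] -> top COUNT story titles (default 5)
hn_top() {
  local count=${1:-5} rank=1 id title
  for id in $(curl -s https://hacker-news.firebaseio.com/v0/topstories.json |
              tr -d '[]' | tr ',' ' '); do
    title=$(curl -s "https://hacker-news.firebaseio.com/v0/item/$id.json" |
            sed -n 's/.*"title":"\([^"]*\)".*/\1/p')
    hn_format "$rank" "$title"
    rank=$((rank + 1))
    [ "$rank" -gt "$count" ] && break
  done
}
```

Since the `.bashrc` snippet tolerates empty output, a real version would likely print a cached result when one exists and refresh it in the background, which is what the "Fetching in the background" branch hints at.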
#!/usr/bin/env bash
set -euo pipefail
# patch-claude-code.sh — Rebalance Claude Code prompts to fix corner-cutting behavior
#
# What this does:
# Patches the npm-installed @anthropic-ai/claude-code cli.js to rebalance
# system prompt instructions that cause the model to cut corners, simplify
# excessively, and defer complicated work.
#
@benvanik
benvanik / hypothesis.md
Last active April 17, 2026 01:23
Anthropic Thinking Reduction

Extended Thinking Is Load-Bearing for Senior Engineering Workflows

Produced by Claude based on my extensive data - if there are any issues, it's because Anthropic doesn't let Claude think anymore ;) Unfortunately Claude deleted my January logs containing the bulk of my work, so only summary analysis is available - January was what I expect, February started sliding, and March was a complete and utter loss.

Summary

Quantitative analysis of 17,871 thinking blocks and 234,760 tool calls across 6,852 Claude Code session files reveals that the rollout of thinking content redaction (redact-thinking-2026-02-12) correlates precisely with a measured quality regression in complex, long-session engineering workflows.

@alganet
alganet / c89cc.sh
Last active May 10, 2026 08:33
c89cc.sh - standalone C89/ELF64 compiler in pure portable shell
#!/bin/sh
# ISC License
# Copyright (c) 2026 Alexandre Gomes Gaigalas <alganet@gmail.com>
# Permission to use, copy, modify, and/or distribute this software for any
# purpose with or without fee is hereby granted, provided that the above
# copyright notice and this permission notice appear in all copies.
# THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
@sc0tfree
sc0tfree / AutoModeSummary.md
Created March 31, 2026 16:34
Claude Code Auto Mode

Claude Code Auto Mode: A Comprehensive Technical Summary

Auto mode replaces the human "approve/deny" permission prompt with an ML classifier that evaluates every tool call before execution. Instead of asking you whether rm -rf node_modules is okay, a second instance of Claude (Sonnet 4.6) reads the conversation transcript and decides in real time.

This document walks through how it works, from the high-level architecture down to the actual code.
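The shape of that classifier loop can be sketched as: hand the transcript and the proposed tool call to a second model, ask for a verdict, and fail closed on anything unexpected. Everything below is an illustrative assumption, not Claude Code's actual code: the ALLOW/DENY protocol, the prompt wording, and the model id are all stand-ins.

```shell
#!/bin/sh
# Sketch of an auto-mode-style gate: a second model reads context plus the
# proposed tool call and answers ALLOW or DENY. Illustrative only.

# parse_verdict RAW -> "allow" or "deny"; deny on anything unexpected
parse_verdict() {
  case $1 in
    ALLOW*|allow*) echo allow ;;
    *)             echo deny  ;;
  esac
}

# classify TRANSCRIPT TOOL_CALL -> allow/deny via the Anthropic Messages API
classify() {
  raw=$(curl -s https://api.anthropic.com/v1/messages \
    -H "x-api-key: $ANTHROPIC_API_KEY" \
    -H "anthropic-version: 2023-06-01" \
    -H "content-type: application/json" \
    -d "{\"model\":\"claude-sonnet-4-5\",\"max_tokens\":5,
         \"messages\":[{\"role\":\"user\",\"content\":
           \"Transcript:\\n$1\\n\\nProposed tool call: $2\\nReply ALLOW or DENY.\"}]}" |
    sed -n 's/.*"text":"\([^"]*\)".*/\1/p')
  parse_verdict "$raw"
}
```

The fail-closed default in `parse_verdict` is the important design choice: a garbled or empty classifier response falls back to denying the tool call, the same safe direction as the human prompt it replaces.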


Table of Contents

@mandarBadve
mandarBadve / SpinnerVerbs.ts
Created March 31, 2026 16:18
Claude Code Spinner Verbs
export class SpinnerVerbs {
  private static readonly DEFAULT_VERBS: string[] = [
    'Accomplishing',
    'Actioning',
    'Actualizing',
    'Architecting',
    'Baking',
    'Beaming',
    "Beboppin'",
    'Befuddling',
@Houstoten
Houstoten / autoDream.ts
Created March 31, 2026 10:58
Claude Code internals: AutoDream, Buddy companion, prompt cache economics, microcompact
// biome-ignore-all assist/source/organizeImports: ANT-ONLY import markers must not be reordered
// Background memory consolidation. Fires the /dream prompt as a forked
// subagent when time-gate passes AND enough sessions have accumulated.
//
// Gate order (cheapest first):
// 1. Time: hours since lastConsolidatedAt >= minHours (one stat)
// 2. Sessions: transcript count with mtime > lastConsolidatedAt >= minSessions
// 3. Lock: no other process mid-consolidation
//
// State is closure-scoped inside initAutoDream() rather than module-level