
@rohitg00
rohitg00 / llm-wiki.md
Last active April 12, 2026 16:22 — forked from karpathy/llm-wiki.md
LLM Wiki v2 — extending Karpathy's LLM Wiki pattern with lessons from building agentmemory

LLM Wiki v2

A pattern for building personal knowledge bases using LLMs. Extended with lessons from building agentmemory, a persistent memory engine for AI coding agents.

This builds on Andrej Karpathy's original LLM Wiki idea file. Everything in the original still applies. This document adds what we learned running the pattern in production: what breaks at scale, what's missing, and what separates a wiki that stays useful from one that rots.

What the original gets right

The core insight is correct: stop re-deriving, start compiling. RAG retrieves and forgets. A wiki accumulates and compounds. The three-layer architecture (raw sources, wiki, schema) works. The operations (ingest, query, lint) cover the basics. If you haven't read the original, start there.
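The three-layer architecture and the lint operation can be sketched as a directory convention plus a check pass. This is a minimal illustration, not the original's implementation: the directory names (`sources/`, `wiki/`) come from the pattern, but the required section names and link check are assumptions chosen for demonstration.

```python
# Sketch of a wiki "lint" pass over the three-layer layout.
# REQUIRED_SECTIONS and the link heuristic are illustrative assumptions.
from pathlib import Path

REQUIRED_SECTIONS = ["## Summary", "## Key claims", "## Sources"]

def lint_page(text: str) -> list[str]:
    """Return a list of problems found in one wiki page."""
    problems = []
    for section in REQUIRED_SECTIONS:
        if section not in text:
            problems.append(f"missing section: {section}")
    if "](" not in text:  # crude check: page has no markdown links back to sources/
        problems.append("no links back to sources/")
    return problems

def lint_wiki(root: Path) -> dict[str, list[str]]:
    """Lint every page under wiki/; return only pages with problems."""
    report = {}
    for page in root.glob("wiki/*.md"):
        problems = lint_page(page.read_text())
        if problems:
            report[page.name] = problems
    return report
```

In practice the agent runs a pass like this periodically and repairs the flagged pages, which is what keeps the wiki from rotting.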

LLM Wiki

A pattern for building personal knowledge bases using LLMs.

This is an idea file; it is designed to be copy-pasted into your own LLM agent (e.g. OpenAI Codex, Claude Code, OpenCode / Pi, etc.). Its goal is to communicate the high-level idea; your agent will build out the specifics in collaboration with you.

The core idea

Most people's experience with LLMs and documents looks like RAG: you upload a collection of files, the LLM retrieves relevant chunks at query time, and generates an answer. This works, but the LLM is rediscovering knowledge from scratch on every question. There's no accumulation. Ask a subtle question that requires synthesizing five documents, and the LLM has to find and piece together the relevant fragments every time. Nothing is built up. NotebookLM, ChatGPT file uploads, and most RAG systems work this way.
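The difference can be shown in a toy sketch. This is not any particular system's implementation; the data structures and page name are invented for illustration. RAG re-searches the raw documents on every question, while the wiki merges each document's claims into a persistent page at ingest time, so later queries read compiled knowledge.

```python
# Toy contrast: RAG re-derives per query; a wiki compiles once at ingest.
# All names and structures here are illustrative assumptions.

documents = {
    "paper_a.md": "Method X fails on long inputs.",
    "paper_b.md": "Method X was fixed in v2 by chunking.",
}

def rag_answer(query: str) -> list[str]:
    """RAG: search the raw documents from scratch on every question."""
    return [text for text in documents.values() if "Method X" in text]

# Wiki: synthesis happens once, at ingest time, and is stored.
wiki: dict[str, dict] = {}

def ingest(doc_name: str, text: str) -> None:
    """Merge a new document's claims into the persistent wiki page."""
    page = wiki.setdefault("method-x.md", {"claims": [], "sources": []})
    page["claims"].append(text)
    page["sources"].append(doc_name)

def wiki_answer(query: str) -> dict:
    """Queries read the compiled page; no per-query re-synthesis."""
    return wiki["method-x.md"]
```

The point of the sketch: after ingesting both papers, `wiki_answer` returns a single page that already combines both claims, whereas `rag_answer` must find and piece the fragments together again on every call.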
