
@rinaldifonseca
rinaldifonseca / llm-wiki.md
Created April 13, 2026 01:32 — forked from rohitg00/llm-wiki.md
LLM Wiki v2 — extending Karpathy's LLM Wiki pattern with lessons from building agentmemory

LLM Wiki v2

A pattern for building personal knowledge bases using LLMs. Extended with lessons from building agentmemory, a persistent memory engine for AI coding agents.

This builds on Andrej Karpathy's original LLM Wiki idea file. Everything in the original still applies. This document adds what we learned running the pattern in production: what breaks at scale, what's missing, and what separates a wiki that stays useful from one that rots.

What the original gets right

The core insight is correct: stop re-deriving, start compiling. RAG retrieves and forgets. A wiki accumulates and compounds. The three-layer architecture (raw sources, wiki, schema) works. The operations (ingest, query, lint) cover the basics. If you haven't read the original, start there.
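To make the three layers and three operations concrete, here is a minimal sketch. The file layout, field names, and string-concatenation "synthesis" are all illustrative assumptions, not the canonical structure from the original idea file; in a real setup the agent would do the synthesis and the wiki would live on disk.

```javascript
// Illustrative sketch of the three layers (raw sources, wiki articles,
// schema) and the three operations (ingest, query, lint). All names
// here are assumptions for illustration, not a prescribed structure.
const wiki = {
  sources: {},  // layer 1: raw documents, keyed by id
  articles: {}, // layer 2: compiled knowledge, keyed by topic
  schema: new Set(["topic", "summary", "sourceIds"]), // layer 3: required fields
};

// ingest: file a raw source and fold it into the relevant article.
function ingest(id, text, topic) {
  wiki.sources[id] = text;
  const article = wiki.articles[topic] ?? { topic, summary: "", sourceIds: [] };
  // Stand-in for LLM synthesis: a real agent would rewrite the article,
  // not just append.
  article.summary += (article.summary ? " " : "") + text;
  article.sourceIds.push(id);
  wiki.articles[topic] = article;
}

// query: read the compiled article instead of re-retrieving chunks.
function query(topic) {
  return wiki.articles[topic]?.summary ?? null;
}

// lint: flag articles that violate the schema or cite missing sources.
function lint() {
  const problems = [];
  for (const article of Object.values(wiki.articles)) {
    for (const field of wiki.schema)
      if (!(field in article)) problems.push(`${article.topic}: missing ${field}`);
    for (const id of article.sourceIds)
      if (!(id in wiki.sources)) problems.push(`${article.topic}: dangling source ${id}`);
  }
  return problems;
}
```

The point of the shape, not the code: queries hit the compiled layer, and lint keeps the compiled layer honest against the raw layer.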

@martinstarkov
martinstarkov / file_dialog.h
Last active April 13, 2026 01:32
C++23 wrapper for nativefiledialog-extended (with GLFW window)
// Replace Window with your own wrapper, or use a GLFW window directly.
// file_dialog.h:
#pragma once
#include <expected>
#include <optional>
#include <string>
#include <vector>
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>CIA Frequency Detector</title>
<style>
body { margin: 0; background: #000; color: #0f0; font-family: monospace; overflow: hidden; }
canvas { display: block; width: 100vw; height: 80vh; image-rendering: pixelated; }
#controls { height: 20vh; padding: 10px; display: flex; flex-direction: column; justify-content: center; align-items: center; border-top: 1px solid #0f0; }
@nlzoymafwh
nlzoymafwh / cFosSpeed Latest Version Instant Access [Lifetime Use].md
Created April 13, 2026 01:31
cFosSpeed Latest Version Instant Access [Lifetime Use]

cFosSpeed is your go-to solution for optimizing Internet speed and reducing latency, making online activities smoother and more efficient. With its intelligent traffic shaping technology, you can enjoy a seamless browsing experience without interruptions.

Key Features

  • Advanced traffic shaping to prioritize important applications and reduce lag.
  • Real-time analysis of Internet traffic to optimize speed.
  • Easy-to-use interface for quick adjustments and settings.
  • Support for various connection types, including DSL, cable, and mobile networks.
  • Built-in ping optimization for faster response times in online gaming.
  • Multilingual support to cater to users around the globe.
@allan-gar2x
allan-gar2x / HYDRA-008-implementation-plan.md
Created April 13, 2026 01:31
HYDRA-008/013: serverConfig auth split — combined implementation + security review

HYDRA-008 / HYDRA-013 — Implementation Plan & Security Review

Issues: #7700 · #7704
Severity: MEDIUM (P2) · Category: Information Disclosure
Date: 2026-04-13


Vulnerability Summary

@surya0920
surya0920 / llm-wiki.md
Created April 13, 2026 01:31 — forked from karpathy/llm-wiki.md
llm-wiki

LLM Wiki

A pattern for building personal knowledge bases using LLMs.

This is an idea file; it is designed to be copy-pasted into your own LLM agent (e.g. OpenAI Codex, Claude Code, OpenCode / Pi, etc.). Its goal is to communicate the high-level idea; your agent will build out the specifics in collaboration with you.

The core idea

Most people's experience with LLMs and documents looks like RAG: you upload a collection of files, the LLM retrieves relevant chunks at query time, and generates an answer. This works, but the LLM is rediscovering knowledge from scratch on every question. There's no accumulation. Ask a subtle question that requires synthesizing five documents, and the LLM has to find and piece together the relevant fragments every time. Nothing is built up. NotebookLM, ChatGPT file uploads, and most RAG systems work this way.
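A toy contrast of the two modes, with entirely made-up data: RAG-style answering re-finds fragments on every question, while the wiki pays the synthesis cost once at ingest time and answers by lookup afterwards. The word-overlap "retrieval" below is a deliberately crude stand-in for embedding search.

```javascript
// RAG vs. wiki, reduced to the essentials. Data and scoring are
// fabricated for illustration only.
const docs = [
  "Alpha depends on Beta.",
  "Beta was deprecated in v2.",
  "Gamma replaced Beta.",
];

// RAG-style: re-find relevant fragments for every single query;
// nothing is accumulated between questions.
function ragAnswer(question) {
  const hits = docs.filter((d) =>
    question.split(" ").some((word) => d.includes(word))
  );
  return hits.join(" "); // a model would re-synthesize from these each time
}

// Wiki-style: synthesis happened once at ingest; queries are lookups
// against the compiled article.
const wikiArticles = {
  Beta: "Beta was deprecated in v2 and replaced by Gamma; Alpha depends on it.",
};
function wikiAnswer(topic) {
  return wikiArticles[topic] ?? null;
}
```

The wiki answer already contains the synthesis across all three documents; the RAG path hands the model raw fragments and asks it to redo that work per question.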

@forkyau
forkyau / hourly_rainfall_data-2026-04-13|09-15.csv
Created April 13, 2026 01:31
Gist created by Python code
id station stationid value unit obstime date
0 流浮山 RF001 0 mm 2026-04-13T09:15:00+08:00 2026-04-13
1 湿地公园 RF002 0 mm 2026-04-13T09:15:00+08:00 2026-04-13
2 水边围 N12 0 mm 2026-04-13T09:15:00+08:00 2026-04-13
3 石岗 RF003 0 mm 2026-04-13T09:15:00+08:00 2026-04-13
4 大美督 RF004 0 mm 2026-04-13T09:15:00+08:00 2026-04-13
5 大埔墟 RF005 0 mm 2026-04-13T09:15:00+08:00 2026-04-13
6 北潭涌 RF006 0 mm 2026-04-13T09:15:00+08:00 2026-04-13
7 滘西洲 RF007 0 mm 2026-04-13T09:15:00+08:00 2026-04-13
8 西贡 N15 0 mm 2026-04-13T09:15:00+08:00 2026-04-13

React Streaming SSR Without Server Components — A Practical Guide

React Server Components may look like the center of streaming SSR, but not every app needs that complexity. The point of this article is that renderToPipeableStream, defer(), and ordinary Suspense are enough to stream HTML progressively from the server. In other words, you can build a streaming UX without Server Components or the use client distinction.

What streaming SSR actually means
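The mechanism can be shown without React at all. The sketch below is not React's implementation; the `B:0`/`S:0` ids merely mimic the placeholder ids React emits around a Suspense boundary. The shell flushes immediately with a fallback, and when the slow data resolves, the real markup is appended along with an inline script that swaps it into place mid-stream.

```javascript
// Minimal illustration of out-of-order streaming SSR. `write` receives
// HTML chunks in the order they would go over the wire.
function streamPage(write, slowHtmlPromise) {
  // 1. Flush the shell right away; "B:0" marks the suspended slot and
  //    carries the fallback UI.
  write('<html><body><main><div id="B:0"><p>Loading...</p></div>');
  // 2. When the data resolves, later chunks arrive: the real markup in
  //    a hidden div, then a script that patches it into the slot.
  return slowHtmlPromise.then((html) => {
    write('<div hidden id="S:0">' + html + "</div>");
    write(
      '<script>document.getElementById("B:0").replaceChildren(' +
        '...document.getElementById("S:0").childNodes);</script>'
    );
    write("</main></body></html>");
  });
}
```

With renderToPipeableStream this happens for you: piping the stream to the response flushes the shell as soon as it is ready, and each Suspense boundary that resolves flushes a similar markup-plus-swap-script pair.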

@haremantra
haremantra / llm-wiki.md
Created April 13, 2026 01:30 — forked from karpathy/llm-wiki.md
llm-wiki


@nlzoymafwh
nlzoymafwh / Centurion Setup Full Version Safe Download [Lifetime Use].md
Created April 13, 2026 01:30
Centurion Setup Full Version Safe Download [Lifetime Use]

Centurion Setup makes software installation seamless and hassle-free. With its user-friendly interface, you can set up your applications in no time, ensuring you spend less time on installation and more time on what matters.

Key Features

  • Intuitive setup process that simplifies software installation.
  • Supports a wide range of applications for versatile use.
  • Customizable installation options to meet your specific needs.
  • Offline installer available for easy setup without internet connectivity.
  • Safe and secure download with regular updates.

Why Choose This Version