
Luca Guzzon (lguzzon)

Quality Is Free - Getting There Isn't
lguzzon / ccsettings.md
Created February 1, 2026 07:44 — forked from ivanfioravanti/ccsettings.md
Claude Code with various models

Using Claude Code with various AI providers

Create various settings files. I have one file for each provider, all in ~/.claude:

  • KIMI K2.5: kimi_settings.json
{
  "env": {
    "ANTHROPIC_BASE_URL": "https://api.moonshot.ai/anthropic",
lguzzon / README.md
Created February 5, 2026 19:20 — forked from ChristopherA/README.md
Self-Improving Claude Code: A bootstrap seed prompt that evolves into a sophisticated configuration system

Self-Improving Claude Code: A Bootstrap Seed

The Hypothesis

A single prompt (~1400 tokens), placed in a project's .claude/CLAUDE.md, can bootstrap a Claude Code instance into a self-improving system — one that captures learnings, extracts patterns, evolves its own configuration, and gets meaningfully better at helping its user with each session.

No pre-built infrastructure required. No user-level config. No hooks, skills, templates, or elaborate folder hierarchies. Just a seed and the affordances Claude Code already provides.

Background

lguzzon / llm-wiki.md
Created April 14, 2026 08:58 — forked from karpathy/llm-wiki.md
llm-wiki

LLM Wiki

A pattern for building personal knowledge bases using LLMs.

This is an idea file: it is designed to be copy-pasted into your own LLM agent (e.g. OpenAI Codex, Claude Code, OpenCode / Pi, etc.). Its goal is to communicate the high-level idea; your agent will build out the specifics in collaboration with you.

The core idea

Most people's experience with LLMs and documents looks like RAG: you upload a collection of files, the LLM retrieves relevant chunks at query time, and generates an answer. This works, but the LLM is rediscovering knowledge from scratch on every question. There's no accumulation. Ask a subtle question that requires synthesizing five documents, and the LLM has to find and piece together the relevant fragments every time. Nothing is built up. NotebookLM, ChatGPT file uploads, and most RAG systems work this way.
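For contrast, here is a minimal sketch of that stateless retrieve-then-generate loop (purely illustrative: the retrieval is a toy lexical scorer, and generate_answer is a stub standing in for whatever model call your stack makes):

def split_into_chunks(text, size=500):
    # Naive fixed-size chunking of a document.
    return [text[i:i + size] for i in range(0, len(text), size)]

def retrieve(chunks, query, k=5):
    # Toy lexical-overlap scoring; real systems use embeddings, but the flow is the same.
    q = set(query.lower().split())
    return sorted(chunks, key=lambda c: len(q & set(c.lower().split())), reverse=True)[:k]

def generate_answer(context, query):
    # Placeholder for the LLM call; a real system would prompt a model with the context.
    return f"Answer to {query!r} synthesized from {len(context)} retrieved chunks."

def answer(documents, query):
    chunks = [c for doc in documents for c in split_into_chunks(doc)]
    context = retrieve(chunks, query)
    return generate_answer(context, query)
    # Note what is missing: nothing from this answer is saved anywhere,
    # so the next question starts the retrieval and synthesis from scratch.

The pattern described in this file is aimed at exactly that missing accumulation step.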