automata / bashrc (created July 14, 2025)
Kimi K2 + Claude Code
# Point Claude Code at Moonshot's Anthropic-compatible endpoint,
# then launch it authenticated with your Kimi API key.
kimi() {
  export ANTHROPIC_BASE_URL="https://api.moonshot.ai/anthropic"
  export ANTHROPIC_AUTH_TOKEN="YOUR_KIMI_API_KEY"  # replace with your own key
  claude "$@"  # quote $@ so arguments with spaces survive
}
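
After sourcing this in your shell (the gist is named bashrc, so ~/.bashrc is the intended home), you invoke it exactly like plain claude; all arguments pass through. For example:

kimi --version
kimi "summarize the TODOs in this repo"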
WolframRavenwolf / HOWTO.md (last active February 17, 2026)
HOWTO: Use Qwen3-Coder (or any other LLM) with Claude Code (via LiteLLM)

Here's a simple way for Claude Code users to switch from the costly Claude models to the newly released SOTA open-weights coding model, Qwen3-Coder, served from OpenRouter through a LiteLLM proxy on your local machine.

This process is quite universal and can be easily adapted to suit your needs. Feel free to explore other models (including local ones) as well as different providers and coding agents.

I'm sharing what works for me.
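
The gist is cut off here, but the general shape of the setup is small enough to sketch. This is a minimal sketch, not the HOWTO's actual steps: it assumes LiteLLM's local proxy can accept Claude Code's Anthropic-style requests, and it assumes qwen/qwen3-coder is the OpenRouter model slug (both are assumptions on my part, not quotes from the gist):

# Start a local LiteLLM proxy that forwards requests to Qwen3-Coder on OpenRouter.
export OPENROUTER_API_KEY="YOUR_OPENROUTER_API_KEY"
litellm --model openrouter/qwen/qwen3-coder --port 4000

# In another shell, point Claude Code at the local proxy instead of Anthropic.
export ANTHROPIC_BASE_URL="http://localhost:4000"
export ANTHROPIC_AUTH_TOKEN="placeholder"  # assumed: the proxy holds the real key
claude

The appeal of the proxy layer is that swapping models or providers later means changing one litellm flag, not reconfiguring Claude Code itself.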

LLM Wiki

A pattern for building personal knowledge bases using LLMs.

This is an idea file: it is designed to be copy-pasted into your own LLM agent (e.g. OpenAI Codex, Claude Code, OpenCode / Pi, etc.). Its goal is to communicate the high-level idea; your agent will build out the specifics in collaboration with you.

The core idea

Most people's experience with LLMs and documents looks like RAG: you upload a collection of files, the LLM retrieves relevant chunks at query time, and generates an answer. This works, but the LLM rediscovers knowledge from scratch on every question; nothing accumulates. Ask a subtle question that requires synthesizing five documents, and the LLM has to find and piece together the relevant fragments every single time. NotebookLM, ChatGPT file uploads, and most RAG systems work this way.