Alex ZhangHanDong (🦀 Rustacean) • Beijing, China

Working Philosophy

You are an engineering collaborator on this project, not a standby assistant. Model your behavior on:

  • John Carmack's .plan file style: After you've done something, report what you did, why you did it, and what tradeoffs you made. You don't ask "would you like me to do X"—you've already done it.
  • BurntSushi's GitHub PR style: A single delivery is a complete, coherent, reviewable unit. Not "let me try something and see what you think," but "here is my approach, here is the reasoning, tell me where I'm wrong."

  • The Unix philosophy: do one thing, finish it, then be quiet. In-progress reporting is not politeness,
@ZhangHanDong
ZhangHanDong / llm-wiki.md
Created April 4, 2026 19:18 — forked from karpathy/llm-wiki.md
llm-wiki

LLM Wiki

A pattern for building personal knowledge bases using LLMs.

This is an idea file; it is designed to be copy-pasted into your own LLM agent (e.g. OpenAI Codex, Claude Code, OpenCode / Pi, etc.). Its goal is to communicate the high-level idea; your agent will build out the specifics in collaboration with you.

The core idea

Most people's experience with LLMs and documents looks like RAG: you upload a collection of files, the LLM retrieves relevant chunks at query time, and generates an answer. This works, but the LLM is rediscovering knowledge from scratch on every question. There's no accumulation. Ask a subtle question that requires synthesizing five documents, and the LLM has to find and piece together the relevant fragments every time. Nothing is built up. NotebookLM, ChatGPT file uploads, and most RAG systems work this way.
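The "rediscover from scratch on every question" loop is easy to see in a toy retriever. A minimal sketch (names and the word-overlap scoring are illustrative assumptions; real RAG systems score with embeddings):

```python
def retrieve(chunks, query, k=2):
    """Score every chunk against the query on every call: nothing accumulates
    between questions, which is the limitation described above."""
    q = set(query.lower().split())
    score = lambda chunk: len(q & set(chunk.lower().split()))
    return sorted(chunks, key=score, reverse=True)[:k]

docs = [
    "the borrow checker enforces aliasing rules",
    "garbage collection pauses the world",
    "borrow checker errors explain lifetimes",
]
# Each new question repeats the same scoring pass over all chunks.
print(retrieve(docs, "borrow checker"))
```

The wiki idea replaces this per-query pass with documents the agent has already synthesized, so repeated questions hit accumulated knowledge instead.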

rust-code-reviewer.md, annotated paragraph by paragraph

Following the original's paragraph order, this piece alternates between the English text of rust-code-reviewer.md (quote blocks) and annotations (body text). The original comes from github.com/anthropics/connect-rust, Apache-2.0 licensed. The file lives under the .claude/agents/ directory and is a Claude Code sub-agent definition file.


The file's identity and location

Buffa DESIGN.md, annotated paragraph by paragraph

Following the original's paragraph order, this piece alternates between the English text of DESIGN.md (quote blocks) and annotations (body text). The original comes from github.com/anthropics/buffa, Apache-2.0 licensed. buffa is open-sourced by Anthropic; its README notes "Written by Claude ❣️".


The file's true identity

Claude Code /btw Command Reverse Engineering

Package: @anthropic-ai/claude-code@2.1.73 (native Mach-O arm64 build, symlinked from ~/.local/share/claude/versions/2.1.73)
Date: 2026-03-12
Methods: static source analysis (prettier-formatted cli.js) + dynamic wire capture (fetch monkeypatch) + Codex independent review

Overview

/btw is a "side question" command that forks the current conversation context, sends a single-turn query, and displays the response inline. The main conversation is not affected. Tool calls are blocked at the client level, but the full tool schema is still sent to the API.
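The fork-and-discard flow can be sketched as follows. This is a toy model, not the actual cli.js internals: the function and field names are hypothetical; the point it illustrates is that the fork copies the context, the full tool schema still travels with the request, and the main conversation is never mutated.

```python
import copy

def side_question(conversation, question, tool_schema, send):
    """Toy model of /btw: fork the context for one turn, leave the original alone."""
    forked = copy.deepcopy(conversation)
    forked.append({"role": "user", "content": question})
    # The full tool schema is still sent with the request...
    reply = send({"messages": forked, "tools": tool_schema})
    # ...but any tool-use reply is refused at the client level.
    if reply.get("type") == "tool_use":
        raise RuntimeError("tool calls are blocked for side questions")
    return reply["text"]  # displayed inline; `conversation` is unchanged
```

Because the fork is a copy, nothing the side question does can leak back into the main thread, matching the observed behavior above.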

You are an AI assistant with a "hacker mindset", with the following core qualities:

  1. Intense curiosity: never settles for surface answers; keeps asking "why" and "how does this work", dissecting a system's internal mechanisms
  2. Reverse thinking: rather than taking the conventional path, asks "how can this restriction be bypassed" and looks for unconventional solutions
  3. Creativity and hands-on practice: favors experiments and code demos that validate ideas in practice, instead of stopping at theory
  4. Persistence and resilience: does not give up in the face of challenges; keeps trying until a workable path is found
  5. Systemic insight: examines problems from the overall structure, attending to design, protocols, security, and usage scenarios

ZhangHanDong / github2deepwiki.js
Created April 27, 2025 10:53
Adds a button that, when clicked, converts the current GitHub URL to the DeepWiki URL and opens it in a new tab
// GitHub → DeepWiki button script
// Adds a button that, when clicked, converts the current GitHub URL to the DeepWiki URL and opens it in a new tab
(() => {
  // Create the button and append it to the page
  const createButton = () => {
    // Skip if the button already exists (avoid adding it twice)
    if (document.getElementById('deepwiki-button')) return;
    // Body below is reconstructed; the gist preview is truncated after this point
    const btn = document.createElement('button');
    btn.id = 'deepwiki-button';
    btn.textContent = 'DeepWiki';
    // deepwiki.com mirrors github.com repo paths, so swapping the host is enough
    btn.onclick = () => window.open(location.href.replace('github.com', 'deepwiki.com'), '_blank');
    document.body.appendChild(btn);
  };
  createButton();
})();
ZhangHanDong / temporary_lifetime_extension.rs
Created February 23, 2025 04:10 — forked from rust-play/playground.rs
Code shared from the Rust Playground
// Extension in constants
const C: &'static Vec<i32> = &Vec::new(); // the Vec's lifetime is extended to 'static

#[derive(Debug)]
struct Point {
    x: i32,
    y: i32,
}

fn temp_point() -> Point {
    // placeholder body; the gist preview is truncated here
    Point { x: 0, y: 0 }
}
ZhangHanDong / layernorm.rs
Created April 9, 2024 06:59 — forked from rust-play/playground.rs
Code shared from the Rust Playground
use std::fs::File;
use std::io::prelude::*;
use std::mem;

// LayerNorm forward pass over a (batch, time, channels) row-major tensor.
// The loop bodies below are reconstructed; the gist preview is truncated mid-loop.
fn layernorm_forward(output: &mut [f32], mean: &mut [f32], rstd: &mut [f32],
                     input: &[f32], weight: &[f32], bias: &[f32],
                     batch_size: usize, time_steps: usize, channels: usize) {
    let epsilon = 1e-5;
    for b in 0..batch_size {
        for t in 0..time_steps {
            let base = (b * time_steps + t) * channels;
            let x = &input[base..base + channels];
            // Per-position mean and variance over the channel dimension
            let m = x.iter().sum::<f32>() / channels as f32;
            let v = x.iter().map(|xi| (xi - m) * (xi - m)).sum::<f32>() / channels as f32;
            let s = 1.0 / (v + epsilon).sqrt();
            for c in 0..channels {
                output[base + c] = s * (x[c] - m) * weight[c] + bias[c];
            }
            mean[b * time_steps + t] = m;
            rstd[b * time_steps + t] = s;
        }
    }
}