
@up1
Last active April 15, 2026 16:04
Demo with LLM Wiki
# Install the package and run the step-by-step setup wizard
$ pip install obsidian-llm-wiki
$ olw setup
╭────────────────────────────────────────────────────╮
│ obsidian-llm-wiki v0.2.0 · first run setup │
╰────────────────────────────────────────────────────╯
Step 1/4 Ollama connection
Warning: Could not reach http://localhost:11434
You can still configure manually — run olw doctor later.
Ollama URL (http://localhost:11434):
Step 2/4 Fast model (analysis & routing · 3–8B recommended)
(e.g. gemma4:e4b, llama3.2:3b, qwen2.5:14b)
Model name (gemma4:e4b):
Step 3/4 Heavy model (article writing · 7–14B recommended)
(e.g. gemma4:e4b, llama3.2:3b, qwen2.5:14b)
Model name (qwen2.5:14b):
Step 4/4 Default vault path (press Enter to skip)
Vault path (): ~/mywiki
╭──────────────────────────────────────────────────────╮
│ ✓ Setup complete │
│ │
│ Fast model: gemma4:e4b │
│ Heavy model: qwen2.5:14b │
│ Ollama: http://localhost:11434 │
│ Vault: ~/mywiki │
│ │
│ Next steps: │
│ olw init /mywiki │
│ olw run (or: olw ingest --all && olw compile) │
╰──────────────────────────────────────────────────────╯
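If you hit the "Could not reach http://localhost:11434" warning above, you can probe the Ollama server yourself before re-running the wizard. A minimal sketch (the `ollama_reachable` helper is hypothetical, not part of the `olw` CLI; a healthy Ollama root endpoint answers "Ollama is running"):

```python
# Probe the Ollama server URL the wizard asks for; hypothetical helper,
# not part of olw itself.
import urllib.request
import urllib.error


def ollama_reachable(url: str = "http://localhost:11434", timeout: float = 2.0) -> bool:
    """Return True if the Ollama root endpoint responds with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False


print(ollama_reachable())
```

If this prints `False`, start Ollama first, then run `olw setup` (or `olw doctor`, as the wizard suggests) again.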
# Initialise a new project for managing the wiki
$ olw init <path-to-llm-wiki>
Created fresh vault structure
INFO Initialised git repo at
Vault initialised: ~/llm-wiki
Next steps:
1. Drop .md notes into raw/
2. Run olw run (ingest + compile + lint in one step)
3. Review drafts: olw review
# The resulting project structure:
├── raw
├── vault-schema.md
├── wiki
│   ├── INDEX.md
│   └── sources
└── wiki.toml
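The same skeleton can be reproduced by hand; a minimal sketch covering only the directories and files shown above (the real `olw init` also fills `wiki.toml` with defaults and initialises a git repo):

```python
# Recreate the vault skeleton that `olw init` produces.
# Files are left empty here; olw populates wiki.toml itself.
from pathlib import Path


def make_vault(root: str) -> None:
    base = Path(root)
    (base / "raw").mkdir(parents=True, exist_ok=True)
    (base / "wiki" / "sources").mkdir(parents=True, exist_ok=True)
    (base / "vault-schema.md").touch()
    (base / "wiki" / "INDEX.md").touch()
    (base / "wiki.toml").touch()


make_vault("llm-wiki-demo")
```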
# Ingest notes from the raw/ folder and convert them into the wiki/ folder
$ olw ingest raw/{(unknown)}.md
INFO Note Harness engineering for coding agent users.md split into 3 chunks
for analysis (17221 chars, chunk_size=8192)
INFO Analyzing Harness engineering for coding agent users.md [part 1/3] …
INFO Analyzed Harness engineering for coding agent users.md [part 1/3] (10.1s)
INFO Analyzing Harness engineering for coding agent users.md [part 2/3] …
INFO Analyzed Harness engineering for coding agent users.md [part 2/3] (3.8s)
INFO Analyzing Harness engineering for coding agent users.md [part 3/3] …
INFO Analyzed Harness engineering for coding agent users.md [part 3/3] (2.2s)
INFO Source summary written: Harness Engineering For Coding Agent Users.md
INFO Ingested: Harness engineering for coding agent users.md (quality=medium,
concepts=['Harness definition (Agent = Model + Harness)', 'Outer harness
goals (increase correctness, provide feedback loop, reduce toil)',
'Computational vs Inferential guidance/sensors'])
Harness engineering for coding agent users.md ━━━━━━━━━━━━━━━━━━━━━━ 1/1 0:00:16
Done. Ingested: 1 Skipped: 0 Failed: 0
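The 3-chunk split reported in the log follows directly from the numbers it prints: 17221 characters divided into chunks of at most 8192 characters, rounded up:

```python
# Chunk count from the ingest log: 17221 chars at chunk_size=8192.
import math

note_chars = 17221  # size reported by olw ingest
chunk_size = 8192   # chunk_size from the same log line

chunks = math.ceil(note_chars / chunk_size)
print(chunks)  # 3
```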
# Watch mode: changed notes are re-ingested immediately
$ olw watch
# Run the whole pipeline in one step (ingest + compile + lint)
$ olw run
wiki
├── INDEX.md
├── log.md
└── sources
└── Harness Engineering For Coding Agent Users.md
2 directories, 3 files
$ olw query "what is harness engineer"
Sources: Harness Engineering For Coding Agent Users
A harness refers to everything surrounding an AI agent, excluding the model
itself, which can be tailored to specific contexts like coding agents. A
well-built outer harness aims to increase correctness, provide self-correction
feedback loops, and reduce human review toil. It integrates both computational
(fast, deterministic) and inferential (slower, semantic) guidance and feedback
mechanisms, steered by a human. Key concepts related to this include [[Harness
definition (Agent = Model + Harness)]], [[Outer harness goals (increase
correctness, provide feedback loop, reduce toil)]], [[Computational vs Inferential
guidance/sensors]], [[Feedforward vs Feedback controls]], [[Steering loop (human
iteration)]], [[Change lifecycle timing (shift-left principle)]], [[Continuous
drift and health sensors]], and [[Regulation categories (Maintainability
harness)]].
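The `[[...]]` concepts in the answer are plain Obsidian-style wikilinks, so they are easy to pull out of a response programmatically. An illustrative one-liner (not `olw` internals):

```python
# Extract Obsidian [[wikilinks]] from a query answer with a regex.
import re

answer = (
    "Key concepts include [[Harness definition (Agent = Model + Harness)]] "
    "and [[Feedforward vs Feedback controls]]."
)
links = re.findall(r"\[\[(.+?)\]\]", answer)
print(links)
```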
# Generated source note in wiki/sources/ — olw prepends YAML frontmatter:
---
source_title: "Harness engineering for coding agent users"
source_url: "https://martinfowler.com/articles/harness-engineering.html"
captured: "2026-04-15T22:13:05+07:00"
---
The term harness has emerged as a shorthand to mean everything in an AI agent except the model itself - [Agent = Model + Harness](https://blog.langchain.com/the-anatomy-of-an-agent-harness/). That is a very wide definition, and therefore worth narrowing down for common categories of agents. I want to take the liberty here of defining its meaning in the bounded context of using a coding agent. In coding agents, part of the harness is already built in (e.g. via the system prompt, or the chosen code retrieval mechanism, or even a [sophisticated orchestration system](https://www.anthropic.com/engineering/effective-harnesses-for-long-running-agents)). But coding agents also provide us, their users, with many features to build an outer harness specifically for our use case and system.
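The `---`-delimited frontmatter above can be split from the note body with a few lines of standard-library Python. A minimal sketch (naive key/value parsing; real YAML needs a YAML library):

```python
# Split olw-style '---' frontmatter from a note body; minimal parser,
# good enough for the flat key: "value" pairs shown above.
def split_frontmatter(text: str):
    if not text.startswith("---\n"):
        return {}, text
    header, _, body = text[4:].partition("\n---\n")
    meta = {}
    for line in header.splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip().strip('"')
    return meta, body


note = (
    "---\n"
    'source_title: "Harness engineering for coding agent users"\n'
    'source_url: "https://martinfowler.com/articles/harness-engineering.html"\n'
    "---\n"
    "The term harness has emerged..."
)
meta, body = split_frontmatter(note)
print(meta["source_title"])
```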
# Contents of wiki.toml
[models]
fast = "gemma4:e4b"
heavy = "qwen2.5:14b"
# Optional: set heavy = fast to use a single model for everything
[ollama]
url = "http://localhost:11434"
timeout = 600
fast_ctx = 16384 # context window for fast model (tokens)
heavy_ctx = 32768 # context window for heavy model (tokens)
[pipeline]
auto_approve = false
auto_commit = true
auto_maintain = false
watch_debounce = 3.0
max_concepts_per_source = 8
ingest_parallel = false # true = parallel chunks (needs OLLAMA_NUM_PARALLEL>=4)