Two ambitious open-source projects are tackling the same fundamental problem: how do you coordinate multiple AI coding agents to work together effectively?
| Aspect | Gas Town | Swarm-Tools |
|---|---|---|
| Author | Steve Yegge | Joel Hooks |
| Language | Go | TypeScript/Bun |
| Target Platform | Claude Code | OpenCode |
| Philosophy | "Work persists in git" | "Events are truth" |
| Scale Target | 20-30+ agents | Parallel workers |
| Storage | Git worktrees + Beads ledger | Embedded Postgres (PGLite) |
| Maturity | Production-ready feel | Framework-first |
Both projects address the same core challenges:
- Context Death - AI agents lose all state when they restart or hit context limits
- Coordination Chaos - Multiple agents stepping on each other's work
- No Learning - Agents don't remember what worked or failed
Gas Town calls this the "agent restart problem." Swarm-Tools calls it "context compaction survival."
Gas Town builds a hierarchical city where agents have roles:
```
Town Level (~/gt/)
├── Mayor  - Global coordinator, work dispatch
├── Deacon - Health monitoring, patrol executor
└── Daemon - Background heartbeat monitor

Rig Level (per-project)
├── Witness  - Worker manager, stuck detection
├── Refinery - Merge queue processor
├── Polecats - Transient workers (ephemeral)
└── Crew     - Persistent workers (human workspace)
```
The signature principle is GUPP (Gas Town Universal Propulsion Principle):
"When an agent finds work on their hook, they execute immediately. No confirmation. No questions. No waiting."
This prevents the "stalled system" failure mode where restarted agents wait for instructions instead of resuming work.
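The GUPP startup rule can be sketched as a tiny resume loop. This is an illustrative TypeScript sketch, not Gas Town's actual Go implementation; the function and type names here are assumptions:

```typescript
// Hypothetical sketch of the GUPP rule: on startup, an agent checks its
// hook and, if work is present, executes immediately rather than waiting.
type Hook = { workId: string } | null;

function onAgentStart(
  hook: Hook,
  execute: (workId: string) => void,
  awaitInstructions: () => void,
): "executed" | "idle" {
  if (hook !== null) {
    execute(hook.workId); // No confirmation. No questions. No waiting.
    return "executed";
  }
  awaitInstructions();    // Only an agent with an empty hook waits for dispatch
  return "idle";
}
```

The point of the rule is that the decision is unconditional: a restarted agent never pauses to ask whether it should resume hooked work.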
Swarm-Tools uses a flatter coordinator-worker model:
```
Coordinator (never executes work directly)
├── Decompose task into subtasks
├── Select strategy (file-based, feature-based, risk-based)
└── Spawn parallel workers

Workers (independent execution)
├── Reserve files (mutual exclusion)
├── Execute with TDD workflow
├── Checkpoint at 25%, 50%, 75%
└── Submit for review
```
The coordinator is explicitly forbidden from doing work itself—it only orchestrates.
Everything lives in git:
- Beads - A JSONL ledger (`.beads/issues.jsonl`) tracking all work, state, and messages
- Worktrees - Each agent gets an isolated git worktree
- Hooks - Work assignment via "hook" mechanism that survives restarts
- Merge Queue - Sequential rebasing through a refinery process
The insight: if your state is in git, it's automatically versioned, distributed, and recoverable.
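To make the JSONL-ledger idea concrete, here is a hypothetical sketch of a bead record; the field names are assumptions for illustration, not the actual Beads schema:

```typescript
// Hypothetical shape of one line in .beads/issues.jsonl. One JSON object
// per line keeps the file append-friendly and diffable in git, which is
// what makes the ledger automatically versioned and recoverable.
interface Bead {
  id: string;
  type: "issue" | "message";
  state: "open" | "hooked" | "done";
  title: string;
  updated: string; // ISO timestamp; git history supplies the versioning
}

function appendBead(ledger: string, bead: Bead): string {
  return ledger + JSON.stringify(bead) + "\n";
}
```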
Everything is an event:
- Event Store - Append-only log in embedded Postgres (PGLite)
- Projections - Materialized views updated from events
- Checkpoints - State snapshots at progress milestones
- `.hive/issues.jsonl` - Git-backed snapshot for work items
The insight: CQRS/event-sourcing patterns from distributed systems applied to AI coordination.
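The core CQRS shape can be sketched in a few lines: an append-only event log, and a projection rebuilt by folding over it. The event names here are illustrative, not Swarm-Tools' actual event types:

```typescript
// Minimal event-sourcing sketch: the log is the source of truth, and a
// materialized view (subtask id -> progress %) is derived by replay.
type Event =
  | { kind: "subtask_spawned"; id: string }
  | { kind: "progress"; id: string; pct: number }
  | { kind: "completed"; id: string };

function project(events: Event[]): Map<string, number> {
  const view = new Map<string, number>();
  for (const e of events) {
    if (e.kind === "subtask_spawned") view.set(e.id, 0);
    else if (e.kind === "progress") view.set(e.id, e.pct);
    else view.set(e.id, 100); // completed
  }
  return view;
}
```

Because the view is pure replay, it can always be rebuilt from the log after a crash; that is the recovery property event sourcing buys.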
Addressing modes:
- Direct: mayor/, rig/witness (specific agent)
- List: list:name (fan-out to all)
- Queue: queue:name (claim-based)
- Group: @witnesses, @polecats/rig
Messages are beads with type=message. Priority levels (0-4) affect processing order. Agents MUST delete messages after handling.
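The four addressing modes could be classified with a small helper like this sketch (only the prefixes listed above are assumed; any further parsing rules are guesses):

```typescript
// Classify a Gas Town address string into its addressing mode.
type Mode = "direct" | "list" | "queue" | "group";

function addressMode(addr: string): Mode {
  if (addr.startsWith("@")) return "group";      // @witnesses, @polecats/rig
  if (addr.startsWith("list:")) return "list";   // fan-out to all members
  if (addr.startsWith("queue:")) return "queue"; // claim-based delivery
  return "direct";                               // mayor/, rig/witness
}
```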
```typescript
// Durable primitives
DurableMailbox.send(to, message) // Fire and forget
DurableMailbox.receive(agent)    // Pull messages
ask<Req, Res>(agent, request)    // Request/response pattern
```

Swarm-Tools is built on Effect-TS for type-safe durable primitives. The `DurableLock` provides CAS-based mutual exclusion for file reservations.
Each worker gets its own git worktree. No file locking needed—agents work in isolated directories. The Refinery handles merging sequentially with rebasing.
```typescript
reserveSwarmFiles(['src/auth/*', 'tests/auth/*']) // Lock patterns
releaseSwarmFiles()                               // Unlock on complete
```

Multiple agents can work in the same directory but must reserve file patterns. Conflicts are detected at reservation time.
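Reservation-time conflict detection can be sketched like this. Treating two reservations as conflicting only when their patterns are identical is a simplifying assumption; the real swarm-tools matching may compare globs more precisely:

```typescript
// Sketch of pattern reservation: a reservation fails if any requested
// pattern is already held by a different agent; otherwise all patterns
// are claimed atomically.
function reserve(
  held: Map<string, string>, // pattern -> owning agent
  agent: string,
  patterns: string[],
): boolean {
  const conflict = patterns.some((p) => held.has(p) && held.get(p) !== agent);
  if (conflict) return false; // conflict detected at reservation time
  for (const p of patterns) held.set(p, agent);
  return true;
}
```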
Learning is implicit through the beads history. Patterns emerge from what's stored in the ledger. No explicit pattern ranking or anti-pattern detection mentioned in the docs.
```typescript
swarm_record_outcome({
  pattern: "feature-based decomposition",
  success: true,
  context: "OAuth implementation"
})
```

- Patterns ranked by success rate
- Failed patterns automatically inverted to anti-patterns
- Confidence decay over 90 days
- Semantic memory via Ollama embeddings (optional)
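"Confidence decay over 90 days" might look something like the sketch below; linear decay to zero is an assumption on my part, since the docs don't specify the curve:

```typescript
// Hypothetical decay: a pattern's confidence fades linearly with age and
// reaches zero at 90 days, so stale patterns stop outranking fresh ones.
function decayedConfidence(confidence: number, ageDays: number): number {
  const factor = Math.max(0, 1 - ageDays / 90);
  return confidence * factor;
}
```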
```
Daemon (10 min heartbeat)
└── Monitors: Mayor, Deacon, Witness

Deacon (patrol cycles)
└── Health checks: Mayor, Witness

Witness (per-rig patrols)
├── Monitors: Polecats
├── Nudges stuck workers
└── Escalates to Mayor after 3 failed nudges
```
GUPP violation detection: if a polecat has hooked work but makes no progress for 30 minutes, intervention begins.
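The Witness's intervention ladder, as described above, reduces to a small decision function. This is an illustrative sketch; the names and fields are assumptions, not Gas Town's actual API:

```typescript
// Sketch of stuck-worker escalation: hooked work with no progress for 30
// minutes gets a nudge; after 3 failed nudges, escalate to the Mayor.
interface PolecatStatus {
  hookedWork: boolean;
  minutesSinceProgress: number;
  failedNudges: number;
}

function witnessAction(s: PolecatStatus): "none" | "nudge" | "escalate" {
  if (!s.hookedWork || s.minutesSinceProgress < 30) return "none";
  return s.failedNudges >= 3 ? "escalate" : "nudge";
}
```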
```typescript
// Automatic checkpoints at milestones
swarm_progress(25) // Checkpoint stored
swarm_progress(50) // Checkpoint stored
swarm_progress(75) // Checkpoint stored

// Recovery after context death
swarm_recover()    // Returns full checkpoint context
```

Recovery is explicit: call `swarm_recover()` to get the last checkpoint and resume.
Gas Town requires:
- Claude Code CLI (required)
- tmux 3.0+ (recommended for session management)
- Beads (bd) 0.44.0+ (custom task tracking)
- Go 1.23+, Git 2.25+

Swarm-Tools requires:
- OpenCode (required - runs as a plugin)
- Bun (JavaScript runtime)
- Optional: CASS (history), UBS (bug scanning), Ollama (embeddings)
```shell
# Mayor dispatches work to a rig
gt sling gt-123 gastown
# Creates convoy, hooks work to polecat
# Polecat auto-starts via GUPP

# Worker executes molecule steps
bd ready           # Get next step
bd show <step-id>  # View details
bd close <step-id> # Mark complete

# Submit to merge queue
gt done
```

```shell
# User initiates swarm
/swarm "Add OAuth support"

# Coordinator decomposes
swarm_select_strategy # Choose approach
swarm_decompose       # Break into subtasks
swarm_spawn_subtask   # Launch workers

# Workers execute
swarmmail_reserve(['src/auth/*'])
# ... TDD workflow with checkpoints ...
swarm_review   # Quality gate
swarm_complete # Verification (UBS, typecheck, tests)
```

- Mature role hierarchy - Clear separation of concerns between Mayor, Witness, Refinery
- Git-native persistence - Everything survives as git history
- Merge queue with rebasing - Sequential integration prevents conflicts
- Session infrastructure - Deep tmux integration for agent management
- GUPP principle - Elegant solution to the "restart and stall" problem
- Type-safe primitives - Effect-TS provides robust durable abstractions
- Explicit learning system - Pattern tracking with confidence decay
- Checkpoint granularity - Resume at specific progress points
- Framework-agnostic mail - `swarm-mail` package usable independently
- Monorepo structure - Clean separation between orchestration and primitives
Choose Gas Town if:
- You're using Claude Code as your agent platform
- You want git-native state persistence
- You need to scale to 20-30+ concurrent agents
- You prefer hierarchical coordination with clear roles
- You want mature session/tmux integration
Choose Swarm-Tools if:
- You're using OpenCode as your agent platform
- You want event-sourcing patterns and type safety
- You need explicit learning/anti-pattern tracking
- You prefer coordinator-worker over hierarchical models
- You want to use the mail primitives in other projects
Both projects represent serious engineering efforts to solve multi-agent AI coordination. Gas Town takes a "city management" approach with git-native persistence and role hierarchy. Swarm-Tools takes a "distributed systems" approach with event sourcing and durable primitives.
The choice largely depends on your agent platform (Claude Code vs OpenCode) and architectural preferences (hierarchical vs flat, git-native vs event-sourced).
Both are worth watching as the multi-agent AI space evolves.
Links: