Over the last few months, projects like Gas Town by Steve Yegge and OpenClaw by Peter Steinberger have made “AI agent orchestrators” feel suddenly mainstream. It is tempting to treat them as a new kind of intelligence, but under the hood they are still a small set of primitives wired together with discipline: an LLM API call, a state loop, tools, memory, and orchestration.
This raises a practical question: what is actually inside an “agent,” and how is it different from ChatGPT (a chat UI over a model) or coding tools like Claude Code (an agentic coding surface)? Gas Town’s README frames it as a “multi‑agent orchestrator.”
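
To make that concrete, here is a minimal sketch of those primitives wired together: the model call, the state loop, a tool registry, and the simplest form of memory (the growing message history). This is not Gas Town’s or OpenClaw’s code; `call_model`, the tool functions, and the reply shape are hypothetical stand-ins for whichever provider API and tools you actually use.

```python
import json
import os
from typing import Any, Callable

# Tool registry: plain functions the model is allowed to call by name.
TOOLS: dict[str, Callable[..., str]] = {
    "read_file": lambda path: open(path, encoding="utf-8").read(),
    "list_dir": lambda path=".": "\n".join(sorted(os.listdir(path))),
}

def call_model(messages: list[dict[str, Any]]) -> dict[str, Any]:
    """Stand-in for the LLM API call. A real implementation would return
    either {"content": "..."} (final answer) or {"tool": name, "args": {...}}."""
    raise NotImplementedError("wire this up to your provider's chat API")

def run_agent(task: str, max_steps: int = 10) -> str:
    # Memory, in its simplest form: the message history the loop accumulates.
    messages: list[dict[str, Any]] = [{"role": "user", "content": task}]
    for _ in range(max_steps):           # the state loop
        reply = call_model(messages)     # the LLM API call
        if "tool" not in reply:          # no tool requested: the model is done
            return reply["content"]
        result = TOOLS[reply["tool"]](**reply.get("args", {}))  # tool dispatch
        messages.append({                # feed the result back into memory
            "role": "tool",
            "content": json.dumps({"tool": reply["tool"], "result": result}),
        })
    return "stopped: step budget exhausted"
```

Everything an orchestrator adds on top, scheduling many of these loops, routing work between them, persisting their state, is plumbing around this core, not a different kind of machine.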
