This framework formalizes a workflow that consistently produces clean, professional, and maintainable software when working with AI coding agents. It is designed to minimize hallucinations, scope drift, and “AI slop” while preserving velocity and creative leverage.
The core idea is simple: treat AI as a highly capable but short-term contractor that must be re-briefed from first principles at every meaningful phase of work.
Every successful AI-assisted project I’ve run converges on the same prerequisite structure. Skipping or weakening any of these documents reliably degrades output quality.
This is a single, long-form, monolithic Markdown document that defines the project in exhaustive detail.
Characteristics:
- Structured, hierarchical, and explicit
- Written for machines and humans
- Free of ambiguity, hand-waving, or implied logic
- Stable over time, evolving only through controlled edits
Purpose:
- Serves as the global source of truth
- Anchors all AI behavior across every sprint
- Eliminates the need for agents to “infer intent”
This document is always attached to AI interactions. It is never summarized, paraphrased, or partially re-expressed for the agent.
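As a concrete shape, a spec of this kind might open like the following. This is a sketch only; the section names and numbering are illustrative, not prescribed by the framework:

```markdown
# Project Technical Specification (v1.0)

## 1. Purpose and Scope
What the software does, and what it explicitly does not do.

## 2. Architecture
Components, boundaries, and data flow, stated without ambiguity.

## 3. Data Model
Every entity, field, and invariant.

## 4. Functional Requirements
Numbered, testable statements (e.g. "4.2: The API rejects
unauthenticated requests with a 401 response").

## 5. Non-Functional Requirements
Performance, security, and operational constraints.
```

Numbered, granular sections matter because later sprints will cite them by reference ("see spec §4.2") instead of restating intent.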
The Plan of Action is derived from the spec, not parallel to it.
It translates the monolithic spec into:
- A chronological sequence of narrowly scoped coding sprints
- Each sprint small enough to complete in a single focused session
- Each sprint large enough to produce tangible progress
Key properties:
- Explicit boundaries (“do X, do not touch Y”)
- References specific sections of the spec when context is required
- Optimized to prevent hallucinations caused by long-context reasoning
The Plan of Action is not creative. It is procedural.
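To make "procedural" concrete, a single sprint entry might read like this. The sprint number, section references, and scope details are illustrative assumptions, not part of the framework:

```markdown
## Sprint 07: Password reset flow

Scope: implement the reset-request endpoint described in spec §4.6.
Do: add the endpoint, its input validation, and its tests.
Do not touch: authentication middleware, email templates, the data model.
Context: read spec §4.6 and §3.2 only.
```

Note that the entry tells the agent what to read as well as what to write; both halves constrain scope.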
Development proceeds strictly sprint-by-sprint using AI coding agents inside Visual Studio Code.
Every sprint must begin the same way. Deviations here are one of the fastest ways to corrupt a project.
1. Start a fresh chat
   - Never reuse a prior sprint conversation
   - Context carryover causes subtle contradictions and scope bleed
2. Attach required documents
   - Project Technical Specifications
   - Plan of Action
3. Paste the sprint instructions verbatim
   - Do not paraphrase
   - Do not summarize
   - Treat the sprint text as executable instructions
4. Explicitly instruct the AI to:
   - Implement only the current sprint
   - Update the root README.md to reflect changes
   - Commit all changes via git with a clear, intentional commit message
This framing positions the agent as a deterministic executor, not a collaborator guessing intent.
A sprint is considered complete only when:
- The requested functionality exists and works
- Documentation reflects reality
- The repository is in a clean, committed state
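The "clean, committed state" criterion can be checked mechanically rather than by eye. A minimal sketch in a throwaway repository; the file name and commit message are illustrative:

```shell
# Build a throwaway repo so the sketch is self-contained.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "dev@example.com"
git config user.name "dev"

# Simulate a finished sprint: work exists, docs updated, all committed.
echo "# Demo project" > README.md
git add -A
git commit -q -m "sprint-01: scaffold project and document it"

# A sprint is closable only if nothing is left uncommitted.
if [ -z "$(git status --porcelain)" ]; then
  echo "clean: sprint may be closed"
else
  echo "dirty: uncommitted debt remains"
fi
```

The same check can run as a pre-push hook or CI step, turning the "no debt inside a sprint" rule into something enforced rather than remembered.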
No “I’ll do that next sprint” debt is allowed inside a sprint.
Repeat this process until the project reaches MVP status.
Once an MVP exists, development shifts from creation to controlled evolution.
The rules become stricter, not looser.
Maintenance cycles reuse the same mental model but operate on differences, not greenfield intent.
The original Technical Specification document is never replaced.
Instead:
- Ambiguities are clarified
- Bugs and flawed assumptions are corrected
- New requirements are appended with precision
Hard rules:
- Strict versioning is mandatory
- Every edit must be intentional and traceable
- The spec remains the canonical artifact
The diff between spec versions becomes the primary driver of work.
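Extracting that diff is mechanical once spec revisions are tagged. A self-contained sketch; the tag names, file name, and requirement text are illustrative assumptions:

```shell
# Throwaway repo holding two tagged spec revisions.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "dev@example.com"
git config user.name "dev"

printf 'Requirement A\n' > SPEC.md
git add SPEC.md
git commit -q -m "spec v1.0"
git tag spec-v1.0

printf 'Requirement A\nRequirement B\n' > SPEC.md
git commit -q -am "spec v1.1: add Requirement B"
git tag spec-v1.1

# Only this delta feeds the new Plan of Action.
git diff spec-v1.0..spec-v1.1 -- SPEC.md
```

Pasting that diff, rather than the whole spec history, is what keeps the AI's delta analysis focused on genuinely new work.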
For each spec revision:
- AI is used to analyze the delta between versions
- Only those deltas are translated into a new Plan of Action
- The Action Plan version matches the spec version exactly
This ensures:
- No accidental re-implementation of stable components
- No silent scope expansion
- Clear auditability of why changes exist
Each maintenance sprint follows the exact same execution protocol as initial development:
- Fresh chat per sprint
- Attach updated spec and matching action plan
- Paste sprint text verbatim
- Enforce documentation and git hygiene
The only difference is what the sprint targets—not how it is run.
This framework succeeds because it aligns with how AI systems actually behave:
- They are strong executors, weak historians
- They respect explicit structure more than implied intent
- They degrade rapidly under long, meandering conversations
- They perform best when treated as stateless specialists
By externalizing memory into documents and resetting conversational context at every sprint, you prevent entropy from accumulating inside the model.
The result is software that feels intentionally designed rather than statistically assembled.