A lightweight workflow for shipping software with AI coding agents. Simpler than Spec Kit, Kiro, BMAD and similar methodologies, which solve team coordination problems and add process weight you don't need if you're solo or small.
- The idea
- Two artifacts, two jobs
- The usual shape
- Core principles
- Scaling up and down
- What lives where
- Sizing
- Running agents in parallel
- Review
- Quality control after the agent finishes
- Generation quality
- When the system is wrong
- Anti-patterns
- Quick reference
- Templates
- Examples
- On using this document
## The idea

An agent is only as good as the instructions you give it. Vague prompt, vague output. Clear prompt, clear output. Everything here is about writing better instructions.
But "better" doesn't mean "more detailed." Two kinds of thinking go into a piece of work, and they belong in two different places.
## Two artifacts, two jobs

| | Ticket | Spec |
|---|---|---|
| What it is | What needs to be done. Requirements for this feature, bug, or task. | The prompt given to an agent describing what to do and the specifics that matter. |
| For a feature | Goal, acceptance criteria, out of scope, constraints worth knowing. | Technical approach, decisions the agent must follow, named files and patterns, invariants to preserve. |
| For a bug | What the issue is, how it impacts users, how to reproduce it. | If needed, the technical approach to fixing it. Often skipped. Most bug fixes go straight from ticket to agent. |
| Audience | Human stakeholders, future you, the reviewer asking "did we build the right thing?" | The agent and the reviewer asking "is this the right way to build it?" |
| When written | When the work is identified. Lives in the backlog. Refined over time. | Just-in-time, after reading the relevant code. Disposable. |
| Lifecycle | Stable. The what/why survives across implementation approaches. | Volatile. Tied to the codebase as it exists right now. |
| Lives in | Wherever you manage work. Most commonly a tracking system (Linear, Jira, GitHub Issues) for non-trivial work; can be a section of the spec for simpler setups. | The repo, alongside the code. Or the chat, for simple work. |
The clean rule: the ticket answers what we're building and why. The spec answers how we'll build it in this codebase. The spec can include a sentence or two of what/why for orientation, but its main job is the how.
The two can live as separate artifacts or as one. A tracking system holding tickets and a repo holding specs is the recommended setup for non-trivial work because it separates the two lifecycles cleanly and gives you the management benefits of a real backlog. But if you don't want a tracking system, you can put the requirements in a "What and why" section at the top of the spec and treat the spec as the single artifact. The principle is that the two kinds of thinking are kept distinct, even if they share a file. The locations are a workflow choice, not a methodology requirement.
## The usual shape

Most of the time you're working on a project: a feature, a component, a change to an existing app. You break it into units of work, capture each as a ticket (in a tracking system if you have one, or as a top section of the spec if you don't), and work through them. For each one, you either prompt the agent directly (bug fix, obvious approach) or write a spec first (real technical decisions, unfamiliar code).
```text
Project
   │
   ▼
Break into tickets ──► Tracking system (Linear / Jira / GitHub Issues)
   │
   ▼
Pick a ticket
   │
   ▼
Is the technical approach obvious?
   │                         │
   No                       Yes
   │                         │
   ▼                         │
Read code, write spec        │
   │                         │
   ▼                         ▼
Run agent ◄──────────────────┘
   │
   ▼
Quality control
(tests, review, ship)
```
That's the workflow. Project → tickets → spec-when-needed → agent → quality control.
## Core principles

Tickets describe outcomes; specs describe approach. A ticket says what we want to achieve and why. It does not pre-decide the schema, the file paths, the library, the algorithm. Those choices belong in the spec, made after reading the code. Pre-deciding the how at ticket-writing time encodes implicit technical commitments before the relevant information is available, and those commitments get inherited downstream without being revisited. Over-specified tickets feel like rigour but degrade silently.
Surface the decisions in the spec. A spec's main job is to make technical decisions visible before code exists, so you can catch the ones you'd disagree with. If the agent would otherwise pick the migration tool, transaction boundaries, or error semantics on its own, the spec picks them first (with one-line reasons) and you can override. This applies to invariants too: things that must not break. Refactors have few decisions but many invariants, and still need specs.
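What that looks like in a spec, with hypothetical picks for the three examples just named:

```markdown
## Decisions
- Migration tool: chose node-pg-migrate, because it's already a dev dependency
- Transaction boundary: one transaction per scrape run, because a failed run must leave no partial rows
- Error semantics: scraper errors log and continue; database errors abort the run
```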
Size the work to three limits, not one. Tickets need to be small enough that:
- The agent can execute them well. Quality drops when a single generation tries to do too much.
- Review can catch problems. Human or AI, both degrade on sprawling diffs.
- You can debug or roll back cheaply when something's wrong.
AI review has loosened limit 2. It hasn't touched 1 or 3. Small tickets still win.
Keep code as the source of truth. Specs exist to get the code right. Once the code is right, the spec's job is done. Don't maintain specs forever. They drift, and drifted specs lie. If a decision's reasoning needs to live on (rejected alternatives, business constraints), put it in a commit message, PR description, or short ADR. Not in the spec.
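A minimal sketch of what such an ADR can look like, borrowing the dedupe decision from the worked examples later in this document; the number and wording are hypothetical:

```markdown
# ADR 007: Dedupe on source id, fall back to (company, title, location)

Status: accepted

Context: some job sources expose no stable identifier.
Decision: dedupe on the source id when present, otherwise on the normalised
(company, title, location) triple.
Rejected: fuzzy title matching; too many false merges at our volume.
```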
Compress the context. Every word you give an agent takes up attention. Every sentence competes with every other sentence for the model's focus. Say things in the fewest words that still make them clear. Cut restated rules, overlapping instructions, padding phrases, and obvious preamble. This applies to specs, tickets, AGENTS.md files, skill descriptions, system prompts, and anything else an agent reads. A one-page instruction with no redundancy produces better output than a three-page instruction where the same rules are stated four different ways. When in doubt, cut. If the agent misses something, add it back, but assume compression first.
Skip what isn't earning its weight. A bug fix needs a prompt, nothing more. A feature with real technical decisions needs a spec. Most work sits between those. Using a layer out of habit is theatre.
## Scaling up and down

The shape above covers most work. It scales in both directions.
Simpler than the usual shape. For exploration, vibe coding, or quick experiments, skip the whole thing. Open the editor. Run the agent. See what happens. Process weight slows learning, and learning is the point. If what you're building turns out to be worth shipping, add structure retroactively.
Heavier than the usual shape. When building an entire application end-to-end, or working to a client contract with milestones, the project is large enough to need its own brief and to break into phases. A phase is a coherent chunk of the project with its own set of tickets (backend API, frontend, integrations), each ending in something you can ship or demo. Phases are useful for large greenfield work and for client projects with billing milestones. They're rare in day-to-day work, and most projects don't need them.
When you do need phases, the shape becomes:
```text
Project brief ─► Phase ─► Tickets ─► Spec ─► Agent ─► QC
       │
       ├──► Phase ─► Tickets ─► ...
       │
       └──► Phase ─► ...
```
Same workflow, with two extra layers on top when the work is big enough. If you're not building a whole app from scratch or working to a contract, you almost certainly don't need this.
## What lives where

For non-trivial work, tickets in a tracking system are worth the setup. Once you have twenty tickets across three weeks, `ls docs/tickets/` stops being readable, and a tracking system gives you status, filters, cross-time visibility, and the management benefits of a real backlog. Linear, Jira, GitHub Issues, or whatever you use. The discipline of writing the requirements in a separate place from the technical design also helps you think: it forces you to articulate the what and why before you've started picking the how.
If you don't want a tracking system, put the ticket content as a "Requirements" or "What and why" section at the top of the spec. The two kinds of thinking stay distinct; they just share a file.
Specs go in the repo. They reference files by path and need to version with the code. Write them close to implementation; discard them when the code stabilises.
Trivial prompts go in chat. If the whole instruction fits in a sentence, don't save it as a file.
Briefs (project, phase) go in the repo when they exist at all, which is rare.
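Put together, the repo side can look like this sketch; the feature name is a placeholder and the comments restate the lifecycles above:

```text
repo/
├── docs/
│   ├── project.md            # project brief, when one exists at all
│   ├── phases/
│   │   └── <phase-name>.md   # phase briefs, rare
│   └── <feature-name>/
│       └── spec.md           # the how; discard once the code stabilises
└── src/                      # the deliverable
```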
## Sizing

Two things determine how heavy the workflow gets for a piece of work.
Whether the technical approach is obvious sets whether you need a spec. If the codebase already answers the how (established patterns, no new technical territory), no spec. Just prompt. If the work introduces decisions the existing code doesn't already answer, write a spec to surface them.
How much work the ticket represents sets ticket size. Check it against the three limits in the principles. If a ticket fails any of them, split it. If two halves would need to make the same decision, you split along the wrong seam. Decisions should cluster on one side of the cut.
The dangerous quadrant is large work with many open technical decisions. The agent degrades across the generation, review catches the least, bugs are hardest to locate. Split aggressively.
## Running agents in parallel

Default is serial: one ticket, one agent at a time. If you run several in parallel, tickets that touch the same files or the same decision cluster can't run concurrently. Either serialise them, or split further so their surfaces don't overlap. Agents are less likely than humans to notice the conflict before it lands in your branch.
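A minimal sketch of the pre-flight check this implies, assuming each ticket declares the files it expects to touch. The `Ticket` shape, the helper, and the `src/lib/*` paths are hypothetical, not any tracker's API:

```ts
// Two tickets may run in parallel only when their file surfaces are disjoint.
interface Ticket {
  id: string;
  files: string[]; // repo-relative paths the ticket expects to touch
}

function canRunInParallel(a: Ticket, b: Ticket): boolean {
  const surface = new Set(a.files);
  return !b.files.some((file) => surface.has(file));
}

// Hypothetical surfaces for two of the example tickets later in this document.
const dedupe: Ticket = { id: "JOB-4", files: ["src/lib/dedupe.ts", "src/cli/scraper.ts"] };
const companies: Ticket = { id: "JOB-5", files: ["src/lib/companies.ts", "src/cli/scraper.ts"] };

// false: both touch src/cli/scraper.ts, so serialise them or split the CLI change out.
console.log(canRunInParallel(dedupe, companies));
```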
## Review

Review is the check that catches what the spec missed. Today it's mixed: AI handles most first passes, you spot-check and handle decisions AI can't reliably judge. The balance keeps shifting.
What doesn't shift: you own the output. Whoever reviewed it, you shipped it. The quality floor is your skill at building a review setup that actually catches things, which includes verifying your reviewer is doing real work. Spot-check AI approvals. Watch for patterns of approved-but-wrong code. Plant occasional mistakes to see if they get caught.
Non-trivial work gets two review passes:
- Spec review, before execution. Are the decisions and invariants surfaced? Any you'd disagree with? Edit until you agree with everything the spec commits to.
- Output review, after execution. Does the diff do what the ticket asked for? Are there technical decisions in the code that weren't in the spec? Did any invariant break silently? Tests passing isn't enough.
Skip or rubber-stamp either pass on work that matters and the workflow collapses into vibe coding with extra steps.
## Quality control after the agent finishes

Once the agent is done, the code goes through the usual engineering checks before shipping: tests run and pass, a code review skill checks the diff, then the change gets pushed to GitHub (or wherever). This lives outside the scope of this document (any decent quality-control setup works), but it's the other half of the loop. Spec discipline gets the code close to right the first time; quality control catches what discipline missed.
## Generation quality

One limit that's easy to miss because it happens before review: agents get worse when you ask for too much in one shot. This is true even when the ticket is sized right.
Why: a model's attention distributes across the task. Ten things means each gets a thinner slice of reasoning. Errors compound because each token becomes context for the next: a small early mistake gets built on instead of caught. And the model's own self-checking runs out the same way a human's does.
The fix isn't just smaller tickets. It's also smaller steps within a ticket. Instead of "do X, Y, and Z," prompt "first do X. Once that works, do Y." Let each step end in a runnable state. Same ticket, better output.
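The difference in practice, sketched with a hypothetical CLI change (the same `--verbose` example used in the spec template later):

```text
One shot:
  Add a --verbose flag, wire it through the logger, update the CLI help
  text, and add tests for all of it.

Stepped:
  1. Add --verbose flag parsing to the CLI. Stop there.
  2. Now wire it to the logger so debug lines actually appear. Run it.
  3. Now add tests for the flag and the logging behaviour.
```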
## When the system is wrong

Review surfaces problems. That's its job. But when the same kind of problem keeps appearing (the agent keeps missing error handling, keeps picking the wrong library, keeps breaking the same invariant), the fix isn't in any individual spec. It's in the spec skill. Update the template so the thing you keep catching becomes a default thing to surface.
Same for ticket sizing. If review keeps missing things, or the agent keeps sprawling, or past bugs keep being hard to find, tickets are failing one of the three limits. Fix it in the plan skill: tighter decomposition, smaller defaults. Don't fight it at review time.
## Anti-patterns

Tickets that pre-decide the how. "Add a jobs table with columns id, company, title, location..." looks like requirements but it's actually design. It locks in a schema before anyone reads the code. The high-level version ("persist scraped jobs with dedupe across runs") is more honestly specified, because it commits only to what's actually known at ticket-writing time. Pre-deciding the how removes degrees of freedom from whoever has the code in front of them later.
Specs written to follow methodology. If the spec isn't surfacing decisions worth reviewing, you wrote it for yourself. Delete it.
Tickets too big. "Build the API" as one ticket. The agent's output degrades, review misses things, bugs are unlocatable. Split until each ticket fits all three limits.
Prompts too big inside a good ticket. Asking for six things in one shot when the ticket allows five smaller steps. Break the execution into phases.
Tickets too small. Thirty minutes of setup for fifteen minutes of work. Group them.
Phases where they don't belong. Adding a phase layer to a project that doesn't naturally break into chunks. If the project is one continuous push, just use tickets.
Spec as living document. Keeping specs synced with code as it evolves. Sounds sensible, never works. Let the spec do its job and die.
Spec written only for the agent. A clean list of facts with no visible decisions. The reviewer can't catch what they can't see. The spec is primarily for the reviewer, but it still needs named files, testable requirements, and structure the agent can execute from. Both audiences.
Verbose instructions. Specs, tickets, or agent-facing files that say the same thing three ways, state the obvious, or pad ideas with hedging. Every extra word dilutes the agent's attention on the ones that matter. Cut restated rules, overlapping instructions, and preamble. Compression is a feature, not a style choice.
Rubber-stamping. Approving output because it looks plausible. Or approving an AI's approval without spot-checking. Same failure, different levels. If you can't verify review was real, the approval is decorative.
Heavyweight tooling. Spec Kit, Kiro and similar tools exist to coordinate teams. If you don't have a team, that process weight is pure overhead. A tracking system, a docs folder, and a few skill files is all you need.
## Quick reference

Before handing work to an agent:
- Will the agent have to hold too much in its head? If yes, split the ticket or break the prompt into steps.
- Is the technical approach obvious from existing patterns? If yes, prompt directly. If no, write a spec.
- Is this part of multi-day work? If yes, track it as a ticket. If no, just run the agent.
- Will your review setup catch problems, and can you revert or debug cheaply if it's wrong? If no, split further.
Minimum workflow for a single change: ticket or prompt → agent → review.
Usual workflow: project → tickets → specs where needed → agent → quality control, per ticket.
Heavy workflow (only when needed): project brief → phases → tickets → specs where needed → agent → quality control, per ticket.
## Templates

Starting points, not rigid structures. Delete sections that don't apply. Add sections the work demands. Match template weight to work weight.
### Ticket

The what and the why. Lives in the tracking system if you use one; otherwise lives at the top of the spec as a "Requirements" or "What and why" section. Describes outcomes and constraints, not technical approach.
```markdown
## Goal
One or two sentences. What this accomplishes from a user perspective.
## Context
Why this, why now. What changed or what's painful that this resolves. Link to related work.
## Acceptance criteria
- Testable outcomes, one per bullet
- Phrased as outcomes, not implementation steps
- A user or operator can observe each as true after the change
## Out of scope
What this ticket doesn't do, even if related. Pre-empts scope creep.
## Notes
Constraints, known pitfalls, links. Things the implementer should know that aren't outcomes.
```

What this template deliberately doesn't include: a "Proposed approach" section, a schema, file paths, library choices, or any other how. Those go in the spec.
### Bug ticket

A specialised ticket for fixing something broken.
```markdown
## What's broken
One or two sentences. What's happening that shouldn't be.
## Impact
Who's affected and how. Severity. Frequency.
## Reproduction
Steps to make the bug appear. Include the environment if it matters.
## Expected behaviour
What should happen instead.
## Notes
Suspected cause, related changes, anything the implementer should know.
```

Most bug tickets don't need a spec. The fix usually has an obvious approach once the cause is known.
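What "straight from ticket to agent" can look like; the bug and details are invented for illustration:

```text
The scraper exits 0 even when every page fails, so the cron wrapper never
alerts. Repro: run a scrape against an unreachable host. Expected: non-zero
exit code and an error summary on stderr. Reproduce it first, add a failing
test, then fix.
```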
### Spec

The how. Lives at `docs/<feature-name>/spec.md`. Written after reading the codebase.
```markdown
# Spec: <feature-name>
Ticket: <ticket-id, or remove this line if requirements are in the section below>
## Requirements (the what and why)
If the ticket lives elsewhere (a tracking system), this is one or two sentences from the ticket for orientation. Not a re-articulation. Just enough to read the spec without bouncing back.
If there is no separate ticket, this is the full Requirements section: goal, acceptance criteria, out of scope, constraints worth knowing.
## Codebase context
What exists today that this touches. Named files and modules, using full repo-relative paths, not "the jobs module."
## Approach
How we're building it, grounded in the codebase. Named files to create or modify. Patterns to follow.
## Decisions
Technical decisions the agent would otherwise make alone, surfaced with the choice and a one-line reason.
- <what>: chose <the pick>, because <reason>
## Invariants
What must not break. Behaviour to preserve, contracts to honour, downstream systems to keep working. Skip when nothing fragile is in scope.
- <what must keep working>, verified by <how the reviewer checks it still does>
## Error behaviour
How failures manifest. Which are recoverable. What the user sees. Skip when defaults are fine.
## Testing strategy
What will be tested, where, and which acceptance criterion each test proves. Every acceptance criterion from the ticket maps to a named test or observable check.
```

When the technical approach is obvious from existing patterns, the spec is a paragraph in chat or alongside the ticket. No file. Example:
> Add a `--verbose` flag to `src/cli/scraper.ts` that enables debug logging via `src/lib/logger.ts`. Follow the `--dry-run` pattern. Add tests in `tests/cli.spec.ts`.
If this is enough for the agent to execute without making decisions you'd want to review, skip the spec file and go.
### Compression prompt

A reusable prompt for compressing an agent-facing instruction file. Paste the file in at the bottom.
```text
Compress this instruction for an AI coding agent. Goal: fewest words,
maximum clarity. Every word takes up the model's attention and competes
with every other word for focus.
Cut:
- Restated rules (same rule said three different ways)
- Overlapping instructions (rule B is already implied by rule A)
- Padding phrases ("it is important to", "please make sure to", "in general")
- Obvious preamble (stating what the doc is for, what follows, etc.)
- Hedging and qualifiers that don't change behaviour
- Examples that restate the rule without adding new information
Keep:
- The actual rules and constraints
- Concrete examples that show a format or edge case the rule alone doesn't convey
- File paths, exact names, specific identifiers
- Any instruction that has bitten you before
Rules:
- Preserve meaning exactly. Don't drop rules to hit a word count.
- Merge sections that overlap. One header per distinct concern.
- Prefer bullets over prose when the content is a list of rules.
- Prefer prose over bullets when the content is a single connected idea.
- If a rule is a general case with a specific exception, state the
general case and the exception in one sentence, not two.
Return the compressed version only. No commentary, no diff, no
explanation of what was cut.
Here's the instruction to compress:
<paste the file>
```
After running the compression, use the output on a real task and watch the agent's behaviour. If something changes, the compression removed something load-bearing. Add it back. Assume compression first; restore on evidence.
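A before/after sketch of the kind of cut this makes (content invented for illustration):

```text
Before:
  It is important to always make sure that you run the full test suite
  before committing. In general, please also remember that tests need to
  pass locally. Do not commit if the tests are failing.

After:
  Run the full test suite before committing; never commit with failing tests.
```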
### Project brief

Lives at `docs/project.md` or the repo README. Short.
```markdown
# <Project name>
## What we're building
One or two paragraphs. What this is and what it becomes when done.
## Why now
What triggered this. What's painful if we don't build it.
## Success looks like
The observable state when the project is done. User-visible where possible.
## Shape
Rough architecture. Major components. Key technology choices. Enough to orient, not enough to design.
## Phases (if any)
1. <Phase name>: one-line summary
2. <Phase name>: one-line summary
## Out of scope
What we're not building.
```

### Phase brief

Lives at `docs/phases/<phase-name>.md` or at the top of a tracking-system project.
```markdown
# Phase <N>: <name>
## Goal
One paragraph. What this phase delivers.
## Acceptance
What must be true for the phase to be done. Observable where possible.
## Tickets
- <ticket-id>: <title>
- <ticket-id>: <title>
## Phase-level decisions
Decisions that apply across all tickets in this phase. Record once here; don't repeat in each ticket.
- <decision>: <the pick>, because <one-line reason>
## Out of scope
What this phase doesn't cover.
```

### Review checklist

Not a form, it's a discipline. Before approving non-trivial output, walk through:
- Does the diff achieve what the ticket asked for?
- Is every acceptance criterion verifiable in the code?
- Is there any code not traceable to a requirement?
- Are there technical decisions in the code that aren't in the spec's Decisions section?
- Were any invariants quietly broken?
- Are the tests meaningful, exercising observable behaviour rather than just internal shape?
- Would you have done anything differently, and if so, is it preference or correctness?
If something real surfaces, the spec needs an update or the ticket needs rework. If it all passes, ship and move on.
## Examples

Two worked examples showing what the artifacts look like for a real project. The project below is fictional but realistic: a small internal tool that scrapes public job listings and extracts the companies posting them as sales leads.
### Project brief: Jobs-to-Leads

```markdown
# Jobs-to-Leads
## What we're building
A tool that scrapes public job listings and uses them to discover companies
worth contacting for a small sales team. The job data is the raw material;
the companies hiring are the actual product.
The tool prioritises signal quality over volume: a handful of well-qualified
companies is more useful than a firehose of noisy results.
## Why now
A small sales team needs better lead discovery than cold lists provide.
Companies that are actively hiring give a strong "right time" signal.
## Success looks like
The team has a reviewable list of companies to contact, derived from real
hiring activity, refreshed regularly.
## Shape
Start with a CLI that scrapes one source and exports a CSV. Persistence,
review UI, and enrichment come later, only after the data shape proves
useful.
## Phases
1. Scrapers and Spreadsheets: a CLI produces CSV exports with enough signal
to validate the lead thesis
2. Database and Backend API: validated data lands in Postgres; minimal read
API exists
3. Minimal Review UI: users review leads without touching the CSV
4. Company Research and Enrichment: discovered companies are enriched with
external data
5. Product Polish and Workflow Features
6. Deployment and Operations
## Out of scope (first version)
- Candidate or contact sourcing workflows
- Automated outreach
- Direct CRM sync
- Multi-tenant administration
- Complex analytics
```

### Phase 1 tickets

Six tickets covering "Scrapers and Spreadsheets." None of these need a spec file: the technical approach for each is either obvious from existing patterns or simple enough to capture in the ticket's notes. The spec layer starts earning its keep in Phase 2, where persistence introduces real schema and migration decisions.
Notice how each ticket describes outcomes, not implementation. Where field names or column names appear, they're outcomes ("the operator can identify duplicates") rather than design instructions. The agent picks the implementation after reading the code.
#### JOB-1

```markdown
## Goal
Establish a contract that future job sources can implement, so adding new
sources later doesn't require reshaping the collection pipeline.
## Context
Phase 1 starts with one source but expects more. The interface is defined
first, so the first real source has something to implement against.
## Acceptance criteria
- A source contract exists with a clear input (query) and output (job rows)
- A stub implementation conforming to the contract can be exercised end-to-end
- Adding a new source in future requires no changes to the generic collection flow
- Tests cover the generic flow against the stub
## Out of scope
Real scraping (next ticket). Dedupe. CSV export. Persistence.
## Notes
The shape of a job row is intentionally underspecified here so the first real
source can inform it, but the contract should be stable enough to test against.
```

#### JOB-2

```markdown
## Goal
Replace the stub source from JOB-1 with a real implementation against the
chosen public job board.
## Context
First real source. Has to integrate with the contract from JOB-1 without
reshaping it.
## Acceptance criteria
- Configured keyword and location searches return parsed job rows
- A single failed page does not abort a full run
- The output contains enough information per job to support deduplication
and to identify the company hiring
- Tests cover parsing against a fixture response
## Out of scope
Other sources. Dedupe (later ticket). CSV export (later ticket).
## Notes
Choice of HTTP client, parser, and rate-limiting approach is left to the
implementer. The contract from JOB-1 should not need changes; if it does,
flag it.
```

#### JOB-3

```markdown
## Goal
Make the scraper's output reviewable by a non-technical user before any
database is involved.
## Context
The point of Phase 1 is to validate that the data is useful. A spreadsheet
is the cheapest way to enable that validation.
## Acceptance criteria
- A single command runs the scraper and produces an output file the
recipient can open in Excel, Numbers, or Google Sheets
- The output is comparable across runs (consistent shape)
- The command is suitable for cron: no interactive prompts, sensible
exit codes
- The recipient can answer "what jobs were found, by which companies, where"
from the output alone
## Out of scope
Database persistence. Dedupe (next ticket). Multiple output formats.
```

#### JOB-4

```markdown
## Goal
Stop the same job appearing several times in the output when overlapping
keyword searches happen to find it.
## Context
Multiple keyword searches in a single run produce overlapping results.
Without dedupe, the output is noisy and the recipient can't tell how many
real jobs there are.
## Acceptance criteria
- Overlapping searches produce an output where each unique job appears once
- The dedupe behaviour is correct even when some jobs lack a stable
identifier from the source
- Tests cover both the with-identifier and without-identifier paths,
including the edge case of missing fields
## Out of scope
Cross-run dedupe (no persistence yet). Fuzzy matching on company names.
```

#### JOB-5

```markdown
## Goal
Produce a separate, easier-to-review output that lists companies rather
than jobs, since companies are the actual product.
## Context
The jobs output is detailed but noisy. The recruiter mostly wants to know
which companies were discovered and how strongly each appeared.
## Acceptance criteria
- A second output exists listing each unique company with enough information
to prioritise it (e.g. how many jobs, which keywords found them)
- Trivial variations (whitespace, casing) of the same company name don't
produce duplicate rows
- Both outputs are produced from the same command
## Out of scope
Fuzzy company matching across genuine variations (e.g. Acme vs Acme Inc).
External enrichment. Scoring or ranking.
```

#### JOB-6

```markdown
## Goal
Decide whether the chosen source is sufficient for Phase 2, based on real
data rather than assumptions.
## Context
The previous five tickets ship the tool. This one uses it. Its output is
a decision, not code.
## Acceptance criteria
- Seven daily runs completed with a representative keyword and location set
- A written summary describes what worked and what didn't, grounded in the
observed output
- A go/no-go decision on Phase 2 with reasoning tied to specific examples
from the runs
## Out of scope
Implementing additional sources. That decision is the output of this ticket,
not its input.
## Notes
This ticket doesn't produce code. Size the time accordingly: several days
elapsed, around an hour a day of active work.
```

A few patterns worth calling out:
None of these tickets pre-decide the technical approach. Read them again. JOB-2 doesn't say "use BeautifulSoup" or "parse with regex." JOB-3 doesn't say "CSV" or specify column names. JOB-5 doesn't say "comma-separated keywords field." Where specific outcomes matter, they're phrased as outcomes ("the recipient can open this in Excel," "trivial variations don't produce duplicate rows"). The implementer picks the approach.
JOB-1 and JOB-2 were split deliberately. Defining the interface and implementing the first real source are different kinds of work. Splitting means JOB-1 can be reviewed on shape without arguing about parsing, and interface problems get found before JOB-2 builds against them.
JOB-6 is an unusual ticket. It produces a decision, not code. Its acceptance criteria describe outcomes of using the thing, not outcomes of building it. Worth tracking because validation is real work and deserves visibility, but deliberately sized differently from implementation tickets.
No spec files for any of these. The technical approach in each case is either obvious from established patterns or simple enough to be safely left to the agent. Phase 2 will be different. Once Postgres, migrations, and schema enter the picture, specs start earning their weight.
## On using this document

It's a philosophy, not a checklist. Adjust when adjustment serves the work.
When in doubt: the ticket is the what and why. The spec is the how. Most work needs only the ticket. Specs appear when the codebase doesn't already answer the how. Everything else is infrastructure to keep work inside the three sizing limits. Keep what helps. Drop what doesn't.
Code is the deliverable. Specs are scaffolding. Tracking is logistics. Review is the work, whoever's doing it.