---
name: mece-review
description: Review a set of documents or categories for MECE (Mutually Exclusive, Collectively Exhaustive) overlap and gaps
argument-hint: <file-or-directory> [--strict]
allowed-tools: Read, Glob, Grep, Agent
---
Analyze a set of documents, categories, or organizational structures for MECE (Mutually Exclusive, Collectively Exhaustive) quality. Identify overlaps, gaps, and propose cleaner boundaries.
MECE is the principle that items in a set should not overlap (mutually exclusive) and should cover the full space (collectively exhaustive). Both humans and LLMs struggle with MECE — it requires keeping multiple layers of context in mind and finding sharply defined boundaries even when they don't naturally exist.
True MECE is often unachievable. The practical goal is "semi-MECE": groupings that make sense to the audience, minimize overlap, and don't overload with too many fine-grained categories.
Parse $ARGUMENTS to determine what to review:
- If a directory: read all files and treat each as a category/document
- If a list of files: read each one
- If a single file with sections: treat each top-level section as a category
- If a list (bullet or enumerated): treat each item as a category and apply the list-review checks below
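For example, typical invocations might look like this (paths are illustrative; the exact form depends on how the command is installed):

```
/mece-review docs/architecture/
/mece-review roadmap.md --strict
```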
For each document or category, summarize:
- What it covers (its intended role)
- What audience need it serves
- Key topics and claims
Find content that appears in more than one document/category:
- Same topic covered in multiple places with different framing
- Implementation details in a strategy document (or vice versa)
- Duplicate checklists, lists, or recommendations
- Two organizing principles that don't align
For each overlap, note:
- Which documents/categories are involved
- Whether the overlap is intentional (e.g., executive summary restating details) or accidental
- Which document should own the content
Check whether the full space is covered:
- Are there topics that belong in scope but aren't addressed anywhere?
- Are there categories that logically should exist but don't?
Lists (bulleted or enumerated) are the most common MECE target. When reviewing a list, check:
Mutual exclusivity:
- Does any single item belong in two or more categories? If so, the categories overlap. Propose sharper boundary definitions or merge the overlapping categories.
- Are items at the same level of abstraction? Mixing high-level ("Improve testing") with specific ("Add UUID validation test") in the same list signals a categorization problem.
- Do adjacent items cover the same ground with different wording? This is the most common list defect — two bullets that say the same thing slightly differently.
Collective exhaustiveness:
- Is there an obvious item missing? Read the list as a promise to the audience: does it cover the space they'd expect?
- Does the list's framing (title, lead-in sentence) imply coverage that the items don't deliver? A heading like "All security findings" with three bullets is a gap.
Ordering and grouping:
- Are items grouped by a consistent principle? (by priority, by phase, by component, by severity — pick one, not several)
- If the list is enumerated, does the numbering imply priority or sequence? If so, verify the order actually reflects that.
- Would subgroups with headings make the list clearer? A flat list of 13 items is harder to scan than 3 groups of 3-5.
Common list defects:
- Item that is really two items joined by "and" — split it
- Item that is a subset of another item — nest it or merge
- Item that restates the list's heading — remove it
- Inconsistent phrasing (some items are actions, others are states) — normalize
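A hypothetical before/after (item content invented for illustration) showing three of these defects: an "and"-joined item, a subset item, and an item that restates the heading:

```
Remaining work:
- Fix the parser and update its tests    <- two items joined by "and": split
- Update the parser tests                <- subset of the item above: merge
- Finish the remaining work              <- restates the heading: remove

Remaining work:
- Fix the parser
- Update the parser tests
```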
For each issue, propose a specific action:
- Trim: remove duplicated content from the non-owning document, add a cross-reference instead
- Move: relocate content to the document that should own it
- Bridge: add a brief note connecting related sections across documents
- Split: break a document that covers too much into focused pieces
- Merge: combine documents that cover the same ground
Present findings as:
- Document roles — table showing each document's intended role and audience
- Overlaps found — numbered list with specific sections and proposed fixes
- Gaps found — what's missing from the overall coverage
- What's well-placed — sections that are unique and correctly located (no changes needed)
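A sketch of that layout (document names and findings are invented for illustration):

```markdown
## Document roles

| Document | Role | Audience |
|---|---|---|
| overview.md | Strategy summary | Leadership |
| runbook.md | Operational procedures | On-call engineers |

## Overlaps found

1. overview.md "Deployment" and runbook.md "Deploy" both describe the rollout
   checklist. Proposed fix: Trim. Keep it in runbook.md, cross-reference from
   overview.md.

## Gaps found

- No document covers rollback procedures.

## What's well-placed

- runbook.md "Monitoring" is unique and correctly located.
```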
If the --strict flag is passed, treat any overlap as a problem. Otherwise, accept intentional cross-referencing (summaries restating details, etc.) as valid.
- The creativity in MECE is finding sharply defined boundaries — this is the hard part
- Keep the audience's cognitive load in mind — slightly impure MECE is better than too many fine-grained categories
- Flag overlaps but let the user decide — MECE is a quality bar, not a rigid rule
- When in doubt, propose cross-references rather than deletion