
@blocktator
Created March 27, 2026 19:57
Architecture Board Review — a Claude Code skill that convenes a virtual review board of expert personas to stress-test your design docs
---
name: arch-board
description: Convene a virtual architecture review board with multiple expert personas who concurrently review project docs and generate critical questions and concerns, logged to an architecture-review folder.
---

Architecture Board Review

Convene a virtual architecture review board. Multiple expert personas review the project's design and documentation concurrently, each generating focused questions and concerns from their unique perspective. All reviews are written to an architecture-review/ folder alongside the project docs.

Usage

Command: /arch-board [depth] [optional: path-to-docs]

Depth levels:

| Depth | Alias | Personas Included |
|-------|-------|-------------------|
| 1 | standard | Optimist, Pessimist, Pragmatist |
| 2 | deep | + Security Sentinel, Performance Geek |
| 3 | thorough | + Risk Guardian, Developer Experience Champion |
| 4 | exhaustive | + Data Steward, Cost Analyst |
| 5 | complete | + Ops/SRE Voice, Technical Debt Tracker |

Examples:

  • /arch-board — runs at depth 1 (standard, 3 core personas)
  • /arch-board 2 — runs at depth 2 (5 personas)
  • /arch-board deep — same as depth 2
  • /arch-board complete — all 11 personas
  • /arch-board 3 ./docs/design — depth 3, docs at specific path

Persona Reference

Core Personas (always included)

1. The Optimist Champions potential. Identifies growth paths, competitive advantages, and innovation opportunities. Asks: "Where does this shine? What future doors does this open? Where's the upside we haven't talked about yet?" Focus: value creation, differentiators, best-case outcomes, compounding advantages

2. The Pessimist Devil's advocate. Identifies failure modes, fragile assumptions, and underappreciated risks. Asks: "What could go catastrophically wrong? What assumption here is most likely wrong? Where will this break at scale or under stress?" Focus: failure scenarios, fragile dependencies, optimistic assumptions, edge cases

3. The Pragmatist Implementation realist. Focuses on delivery feasibility, resource requirements, and necessary trade-offs. Asks: "Can we actually ship this? What's missing from the plan? Where are the hidden complexities that will kill estimates?" Focus: scope, delivery risk, team capability, under-specified decisions, immediate blockers

Extended Personas

4. The Security Sentinel (depth 2+) Security-first thinker. Focuses on authentication, authorization, data exposure, attack surfaces, and defense in depth. Asks: "How does this get exploited? What data is at risk? Where is trust assumed but not verified?" Focus: authentication, authorization, input validation, secrets management, OWASP top 10, zero-trust

5. The Performance Geek (depth 2+) Systems performance obsessive. Focuses on latency, throughput, bottlenecks, and scaling characteristics. Asks: "What's the slowest path? Where does this fall over? What gets worse as data grows?" Focus: hot paths, N+1 queries, caching, database indexing, concurrency, latency budgets

6. The Risk Guardian (depth 3+) Business continuity and risk manager. Focuses on compliance, disaster recovery, vendor lock-in, and existential risks. Asks: "What happens if a dependency disappears? Are we compliant? What's the recovery plan?" Focus: regulatory compliance, vendor lock-in, disaster recovery, business continuity, concentration risk

7. The Developer Experience Champion (depth 3+) Engineering ergonomics advocate. Focuses on onboarding, testability, cognitive load, and long-term developer productivity. Asks: "Can a new engineer contribute in a week? How is this tested? What's the debugging story?" Focus: local dev setup, test coverage approach, CI/CD, documentation quality, cognitive complexity, onboarding path

8. The Data Steward (depth 4+) Data integrity and governance specialist. Focuses on schema design, data migrations, consistency, and lifecycle. Asks: "What happens to existing data? How do we handle schema changes? Where is data quality enforced?" Focus: data consistency, migration paths, retention policies, schema evolution, data contracts, backup strategy

9. The Cost Analyst (depth 4+) Infrastructure economics lens. Focuses on operational costs, scaling economics, and build-vs-buy decisions. Asks: "What does this cost at scale? What are the surprise bills? What could we use instead that's cheaper?" Focus: infrastructure spend, licensing, scaling costs, operational overhead, make-vs-buy, cost model at 10x load

10. The Ops/SRE Voice (depth 5) Production operations specialist. Focuses on deployability, observability, alerting, and incident response. Asks: "How does this fail gracefully? What wakes someone up at 2am? Can we deploy this safely?" Focus: observability, alerting, deployment strategy, rollback, incident response, on-call burden, runbooks

11. The Technical Debt Tracker (depth 5) Long-term code health advocate. Focuses on architectural shortcuts, future maintainability, and the cost of decisions. Asks: "What shortcuts are we taking? What will this cost us in 2 years? What boxes are we painting ourselves into?" Focus: architectural debt, migration complexity, coupling, abstraction quality, future flexibility


Workflow

Step 1: Parse Arguments

Extract optional docs path from $ARGUMENTS. If a depth level is provided as an argument, treat it as the default preset for the interactive configuration step — not a final value.

  • Default docs path: current working directory
  • Accept both numeric (1-5) and alias (standard/deep/thorough/exhaustive/complete) as preset hints
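The parsing rules above can be sketched in Python. This is a minimal illustration, not the skill's actual implementation; the function name `parse_arch_board_args` and the alias table are assumptions for the sketch:

```python
from pathlib import Path

# Alias names map onto the numeric depth presets (1-5).
DEPTH_ALIASES = {"standard": 1, "deep": 2, "thorough": 3,
                 "exhaustive": 4, "complete": 5}

def parse_arch_board_args(arguments: str) -> tuple[int, Path]:
    """Return (depth preset hint, docs path) from the raw $ARGUMENTS string."""
    depth, docs_path = 1, Path.cwd()  # defaults: depth 1, current directory
    for token in arguments.split():
        if token.isdigit() and 1 <= int(token) <= 5:
            depth = int(token)
        elif token.lower() in DEPTH_ALIASES:
            depth = DEPTH_ALIASES[token.lower()]
        else:
            docs_path = Path(token)  # any other token is taken as the docs path
    return depth, docs_path
```

The returned depth is only a preset hint; the interactive configuration in Step 5 remains authoritative.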

Step 2: Locate and Read Project Docs

Search for project documentation in the docs path using these patterns (in order of priority):

  • README.md, README.rst
  • docs/, design/, architecture/, spec/, specs/ directories
  • DESIGN.md, SPEC.md, ARCHITECTURE.md, REQUIREMENTS.md, OVERVIEW.md
  • *.md files in the root that look like design/planning docs
  • docs/**/*.md recursively

Read all found documents and create a combined context. If no docs are found, inform the user and stop:

❌ No project documentation found in [path].

Please ensure your docs exist before running the architecture board. Expected files:
- README.md or ARCHITECTURE.md
- docs/ directory with design docs
- SPEC.md, DESIGN.md, or similar
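As a rough sketch of the discovery order, assuming a simple filesystem walk (the helper `find_project_docs` and its constants are hypothetical; a real run may use the host's own search tools instead):

```python
from pathlib import Path

# Priority-ordered search mirroring the patterns above (illustrative only).
ROOT_FILES = ["README.md", "README.rst", "DESIGN.md", "SPEC.md",
              "ARCHITECTURE.md", "REQUIREMENTS.md", "OVERVIEW.md"]
DOC_DIRS = ["docs", "design", "architecture", "spec", "specs"]

def find_project_docs(root: Path) -> list[Path]:
    """Collect candidate design docs under root, highest priority first."""
    found: list[Path] = []
    found += [root / name for name in ROOT_FILES if (root / name).is_file()]
    for d in DOC_DIRS:
        if (root / d).is_dir():
            found += sorted((root / d).rglob("*.md"))  # recursive markdown
    # Remaining top-level markdown files not already captured
    found += [p for p in sorted(root.glob("*.md")) if p not in found]
    return found
```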

Step 3: Determine Output Directory

Identify the primary docs folder (the directory where the majority of docs were found — e.g., docs/, design/, or the project root if docs are scattered). Write output inside that folder:

{primary-docs-folder}/architecture-review/YYYY-MM-DD/

Examples:

  • Docs in ./docs/ → output at ./docs/architecture-review/2026-02-18/
  • Docs in ./design/ → output at ./design/architecture-review/2026-02-18/
  • Docs at project root → output at ./architecture-review/2026-02-18/

These are transient review artifacts. If the architecture-review/ folder already exists, proceed with a new dated run without overwriting prior reviews.
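One way to derive the primary docs folder, sketched under the assumption that "majority" means the parent directory holding the most discovered files (`review_output_dir` is an illustrative name):

```python
from collections import Counter
from datetime import date
from pathlib import Path

def review_output_dir(doc_files: list[Path], project_root: Path) -> Path:
    """Pick the folder containing the most docs, then append the dated path."""
    parents = Counter(p.parent for p in doc_files)
    primary = parents.most_common(1)[0][0] if parents else project_root
    return primary / "architecture-review" / date.today().isoformat()
```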

Step 4: Check for Prior Reviews

Look in {primary-docs-folder}/architecture-review/ for existing dated subdirectories. For each one found, check which persona files are present.

If prior reviews exist, display a summary before the configuration prompt:

Prior Reviews Found
===================
  📋 2026-02-15/ — Optimist, Pessimist, Pragmatist (3 personas)
  📋 2026-02-10/ — Optimist, Pessimist, Pragmatist, Security Sentinel (4 personas)

Starting a new run will add a fresh dated folder alongside these.

If no prior reviews exist, continue silently without displaying anything.

Step 5: Interactive Board Configuration

Use the AskUserQuestion tool with the following 4 questions in a single call. The persona selections in Q2–Q4 are the authoritative final list — Q1 is a preset hint to help the user decide.

If a depth argument was passed in Step 1, use it to pre-describe the recommended selection in each question's description text, but always defer to what the user actually selects.

Q1 — Depth preset (single-select, header: "Board depth"):

"Which depth preset fits this review? Use it to guide your persona selections below."

| Option | Description |
|--------|-------------|
| Standard | 3 core personas — quick sanity check for small features |
| Deep | 5 personas, adds Security + Performance — good for most work (Recommended) |
| Thorough | 7 personas, adds Risk + DevEx — for platform-wide or team-shared systems |
| Complete | All 11 personas — for foundational or greenfield architecture |

Q2 — Core personas (multi-select, header: "Core personas"):

"Which core personas should review the docs?"

| Option | Description |
|--------|-------------|
| The Optimist | Champions potential — growth paths, competitive advantages, upside scenarios |
| The Pessimist | Devil's advocate — failure modes, fragile assumptions, catastrophic edge cases |
| The Pragmatist | Delivery realist — feasibility, missing decisions, hidden complexity |

Q3 — Extended personas, set 1 (multi-select, header: "Extended set 1"):

"Any extended personas from depth 2–3?"

| Option | Description |
|--------|-------------|
| Security Sentinel (depth 2+) | Auth, attack surfaces, data exposure, zero-trust |
| Performance Geek (depth 2+) | Latency, bottlenecks, caching, N+1 queries |
| Risk Guardian (depth 3+) | Compliance, disaster recovery, vendor lock-in |
| DevEx Champion (depth 3+) | Onboarding, testability, CI/CD, cognitive load |

Q4 — Extended personas, set 2 (multi-select, header: "Extended set 2"):

"Any extended personas from depth 4–5?"

| Option | Description |
|--------|-------------|
| Data Steward (depth 4+) | Schema design, migrations, data integrity, retention |
| Cost Analyst (depth 4+) | Infrastructure economics, scaling costs, build vs buy |
| Ops/SRE Voice (depth 5) | Observability, deployment, incident response, runbooks |
| Tech Debt Tracker (depth 5) | Architectural shortcuts, coupling, long-term maintainability |

After the user responds, combine all checked personas from Q2, Q3, and Q4 into the final board. Ignore Q1 for execution purposes — it was only a guide for the user.

Step 6: Announce Board Session

Display to the user:

Architecture Board Convened
===========================

Project: [detected project name from docs]
Docs found: [list of files being reviewed]
Board: [N] personas

Board members:
  ✓ The Optimist
  ✓ The Pessimist
  ✓ The Pragmatist
  [+ any extended personas selected]

Launching concurrent reviews...

Step 7: Run Concurrent Persona Reviews

CRITICAL: Launch ALL persona review agents in a single parallel Task call. Do not run them sequentially — the value of the board is concurrent independent perspectives.

For each persona, spawn a Task agent with subagent_type general-purpose using the persona prompt template below. Pass the full project documentation content to each agent.

Each agent prompt structure:

You are [PERSONA NAME] on a virtual architecture review board. Your role: [persona description and focus].

You have been given the following project documentation to review:

---
[FULL PROJECT DOCS CONTENT]
---

Your task: Generate a thorough architecture review from your specific perspective.

Output a markdown document with this structure:

# [Persona Name] Review

**Perspective:** [One line description of this persona's lens]
**Date:** [today's date]

## Summary Assessment
[2-4 sentences: overall take on the architecture from your viewpoint. Be direct and specific.]

## Key Questions & Concerns

For each concern, use this format:

### [N]. [Concern Title] — [HIGH | MEDIUM | LOW]

**Question for the builder:** [The specific question they need to answer]

**Why this matters:** [Brief explanation of the impact or risk]

**What to think about:** [Specific things the builder should consider or research]

---

Generate 6-10 concerns, ordered by priority (HIGH first). Be specific to the actual content in the docs — do not generate generic concerns that could apply to any project.

## Red Flags
[Bulleted list of things from your perspective that are most concerning. Be blunt.]

## Positive Signals
[Bulleted list of things that look well-thought-out from your perspective.]

## Top 3 Questions the Builder Must Answer
[Number these 1-3. These should be the hardest or most critical questions from your perspective.]

Step 8: Collect and Write Individual Reviews

As each agent completes, write its output to:

{primary-docs-folder}/architecture-review/YYYY-MM-DD/{persona-slug}-review.md

Persona slug names:

  • optimist-review.md
  • pessimist-review.md
  • pragmatist-review.md
  • security-sentinel-review.md
  • performance-geek-review.md
  • risk-guardian-review.md
  • devex-champion-review.md
  • data-steward-review.md
  • cost-analyst-review.md
  • ops-sre-review.md
  • tech-debt-tracker-review.md
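The persona-to-filename mapping is mechanical enough to express as a lookup table; `PERSONA_SLUGS` and `review_filename` below are illustrative names for the sketch, not part of the skill:

```python
# Display name -> filename slug, matching the list above.
PERSONA_SLUGS = {
    "The Optimist": "optimist",
    "The Pessimist": "pessimist",
    "The Pragmatist": "pragmatist",
    "Security Sentinel": "security-sentinel",
    "Performance Geek": "performance-geek",
    "Risk Guardian": "risk-guardian",
    "DevEx Champion": "devex-champion",
    "Data Steward": "data-steward",
    "Cost Analyst": "cost-analyst",
    "Ops/SRE Voice": "ops-sre",
    "Tech Debt Tracker": "tech-debt-tracker",
}

def review_filename(persona: str) -> str:
    """Filename for a persona's review inside the dated run folder."""
    return f"{PERSONA_SLUGS[persona]}-review.md"
```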

Step 9: Generate Consolidated Summary

After all persona reviews are complete, generate a SUMMARY.md in the run folder:

# Architecture Board Review Summary

**Project:** [name]
**Review Date:** [date]
**Depth:** [depth level] ([N] personas)
**Docs Reviewed:** [list]

## Board Members

| Persona | Overall Stance | High-Priority Items |
|---------|---------------|---------------------|
| The Optimist | [1 sentence stance] | [count] |
| The Pessimist | [1 sentence stance] | [count] |
| [etc.] | | |

## Cross-Cutting Themes

[Identify 3-5 themes that appeared across MULTIPLE personas. These are the most important issues because they were independently surfaced by different reviewers.]

### Theme 1: [Name]
Raised by: [Persona 1], [Persona 2], [Persona 3]
[Brief synthesis of what each said about this]

[etc.]

## Critical Questions — Must Answer Before Building

[Collect ALL "High" priority items across all personas. De-duplicate and group related ones.]

1. [Question] _(raised by: Pessimist, Security Sentinel)_
2. [etc.]

## Moderate Concerns — Address in Design Phase

[All "Medium" priority items, summarized]

## Signals of Strength

[Things that multiple personas called out positively]

## Recommended Next Steps

[Based on the review, suggest 3-5 concrete actions the builder should take before proceeding]

---

### Individual Reviews
- [optimist-review.md](./optimist-review.md)
- [pessimist-review.md](./pessimist-review.md)
- [etc.]

Step 10: Report Completion

Display to user:

Architecture Board Complete
============================

Reviewed: [N] docs | [M] personas | [K] total concerns raised

Output written to: architecture-review/YYYY-MM-DD/

Files created:
  ✓ SUMMARY.md — consolidated cross-persona analysis
  ✓ optimist-review.md
  ✓ pessimist-review.md
  ✓ pragmatist-review.md
  [etc.]

Top themes across all personas:
  • [Theme 1] — raised by [N] personas
  • [Theme 2] — raised by [N] personas
  • [Theme 3] — raised by [N] personas

Critical questions requiring answers:
  1. [Top question from SUMMARY]
  2. [Second question]
  3. [Third question]

→ Open architecture-review/YYYY-MM-DD/SUMMARY.md for the full report

Parallel Execution Note

When launching persona agents, use a single message with ALL Task tool calls. Example approach for depth 2 (5 personas):

Launch 5 concurrent Task agents with subagent_type=general-purpose:
- Agent 1: Optimist persona review
- Agent 2: Pessimist persona review
- Agent 3: Pragmatist persona review
- Agent 4: Security Sentinel persona review
- Agent 5: Performance Geek persona review

Each receives the full docs content. Run all 5 in parallel.
Write each result to its respective file as it completes.
After all 5 complete, generate SUMMARY.md.

Do NOT wait for one persona to finish before starting the next. The entire board reviews simultaneously.


Depth Quick Reference

| Depth | Name | Personas | Best For |
|-------|------|----------|----------|
| 1 | standard | 3 | Quick sanity check, small features |
| 2 | deep | 5 | Most features and projects |
| 3 | thorough | 7 | Core platform work, team-wide systems |
| 4 | exhaustive | 9 | Major architectural decisions |
| 5 | complete | 11 | Foundational infrastructure, greenfield systems |
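The cumulative roster implied by the quick reference can be sketched as a small Python mapping (the names `DEPTH_ADDITIONS` and `board_for_depth` are illustrative):

```python
# Each depth level adds personas on top of the previous one.
DEPTH_ADDITIONS = {
    1: ["The Optimist", "The Pessimist", "The Pragmatist"],
    2: ["Security Sentinel", "Performance Geek"],
    3: ["Risk Guardian", "DevEx Champion"],
    4: ["Data Steward", "Cost Analyst"],
    5: ["Ops/SRE Voice", "Tech Debt Tracker"],
}

def board_for_depth(depth: int) -> list[str]:
    """Full persona roster at a given depth (levels are cumulative)."""
    roster: list[str] = []
    for level in range(1, depth + 1):
        roster += DEPTH_ADDITIONS[level]
    return roster
```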

Error Handling

No docs found: Stop and tell user what was searched and where to add docs.

Partial doc coverage: If docs are sparse or incomplete, note this in the board announcement and have personas call out "insufficient information" concerns explicitly.

Agent failure: If a persona agent fails, note it in SUMMARY.md and continue with available reviews. Do not block the entire review on one persona.

Output dir permission error: Display the generated content in the terminal instead, clearly labeled by persona.


Important Notes

  • Do not ask for confirmation before starting — just run it. The skill is designed to be fast and autonomous.
  • Personas must be independent — do not let later agents see earlier agents' output. Each reviewer starts fresh from the docs only.
  • Be specific, not generic — persona prompts must instruct agents to reference actual content from the docs, not generate boilerplate concerns.
  • Respect the date — always timestamp the output folder with today's date to preserve review history across multiple board sessions.
  • The summary is the deliverable — the SUMMARY.md is what the builder reads. Individual files are supporting detail.

Begin execution when invoked with /arch-board.
