Review Efficiency Prompt Template

Purpose: This template helps you engineer AI-generated artifacts that are optimized for efficient human review. It is based on the principles from "Engineering AI Generated Artifacts for Human Review" (AlteredCraft).

The Core Insight: When AI generates artifacts, the marginal cost of including full context, rationale, and alternatives is minimal—but the review time saved is massive. This template embeds those requirements into your prompts.

Our Approach to Context

Advanced context tools exist (MCP servers, vector databases, ...), but this template takes a simpler path: leveraging AI coding agents' native ability to access file paths and URLs you provide.

Tools like Claude Code, Cursor, and similar AI coding environments can directly read files you reference (local and remote). This path-based approach offers key advantages:

  • Transparency: You see exactly what context the AI is using
  • Privacy: Your organizational knowledge stays in your control
  • Simplicity: No additional infrastructure to set up or maintain
  • Portability: Works across different AI tools with minimal changes

You can get remarkably far with this basic mechanism. If you later need advanced retrieval, these prompts adapt easily—but start simple.
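
As a minimal illustration (the file path, project name, and channel reference below are placeholders for your own), context can be passed simply by pointing the agent at it in the prompt:

Before drafting, read docs/standards/coding.md and the README in the payments-service project, and follow the patterns you find there. For background on the queueing decision, see the #architecture Slack discussion from 2024-Q3.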


Table of Contents

  1. Base Template Structure
  2. Artifact-Specific Adaptations
  3. Customization Guide
  4. Meta-Prompt: Adapting to New Artifact Types

Base Template Structure

This is the foundational structure that applies across artifact types. Each section serves a specific purpose in enabling efficient review.

Generate a [ARTIFACT_TYPE] for [SPECIFIC_TASK] with these requirements:

CONTEXT ARTIFACTS AVAILABLE:
[List whatever organizational knowledge sources you have - formal or informal]

Examples of what to include if you have them:
- Coding standards or style guides (even if informal/undocumented team conventions)
- Past technical decisions or rationale (could be ADRs, wiki pages, Slack threads, or email discussions)
- Technology preferences or "how we do things" docs
- System architecture diagrams or overviews
- README files explaining team practices
- Runbooks or operational knowledge
- Past similar projects or implementations to reference

Don't have formal documentation? Use what you have:
- "See the [project_name] implementation for our approach to similar problems"
- "Reference team discussions in [Slack channel] from [timeframe]"
- "Follow patterns established in [key_file.py]"
- Or simply note: "No formal context artifacts available - use industry best practices"

ARTIFACT REQUIREMENTS:
[Specify the core content the artifact must contain]

DOCUMENTATION REQUIREMENTS FOR REVIEW EFFICIENCY:

1. Decision Transparency
   For each major decision or recommendation, provide:
   - The decision/recommendation made
   - 1-3 alternatives considered (with a brief description of each)
   - Trade-off analysis (pros/cons of chosen approach)
   - Rationale for why this choice fits our context
   - Citations to relevant context (as hyperlinks when possible, or clear references)

2. Context Integration
   - Include inline citations to supporting organizational context (formal or informal)
   - When referencing "our standards" or "best practices," link to or cite the specific source
   - If no formal docs exist, reference actual implementations: "follows pattern from [file/project]"
   - Keep rationale statements concise (1-2 sentences + reference for details)
   - Use hyperlinks when available to make validation efficient

3. Assumptions and Constraints
   - Explicitly state key assumptions being made
   - Note relevant constraints (technical, business, timeline)
   - Flag areas with uncertainty or incomplete information
   - Call out where you're inferring team preferences vs. using documented standards

FINAL STEP - REVIEWER FAQ:
After generating the artifact, analyze it from a reviewer's perspective.
Generate an FAQ section addressing 5-7 questions a senior engineer would
likely ask when reviewing this artifact. Structure each FAQ entry as:
- Question: [Anticipated question]
- Answer: [Clear, concise answer with citations to the artifact and/or organizational context]

Focus FAQ on:
- Clarifying complex decisions
- Addressing edge cases or scale concerns
- Connecting to broader system implications
- Justifying departures from standard approaches (or industry norms if no team standards exist)

Why This Structure Works

  • Decision Transparency: Enables reviewers to validate decisions against context rather than reconstruct reasoning
  • Context Integration: Hyperlinks make verification instant instead of requiring archaeology through docs
  • Assumptions/Constraints: Surfaces potential misalignments early
  • Reviewer FAQ: Pre-answers common questions, dramatically reducing review roundtrips

Artifact-Specific Adaptations

Each artifact type has different review needs. These adaptations show how to customize the base template.

1. Technical Requirements Documents

Review Goal: Validate that proposed architecture and implementation plan align with technical standards, constraints, and team capabilities.

Generate a technical requirements document for [FEATURE/SYSTEM] with these requirements:

CONTEXT ARTIFACTS AVAILABLE:
Use whatever context you have available. Examples might include:
- Coding standards or conventions (formal docs, wiki pages, or "how we do it" READMEs)
- Technology stack preferences or rationale (could be documented decisions or past implementations)
- Past architectural decisions (ADRs if you have them, or references to similar features/projects)
- System architecture overviews (diagrams, docs, or key files that show the big picture)
- Performance/scale requirements (formal SLAs, or past discussions about constraints)
- Similar past implementations to reference

ARTIFACT REQUIREMENTS:
1. Architecture Overview
   - High-level component diagram description
   - Technology stack with versions
   - Integration points with existing systems

2. Implementation Plan
   - Phased approach with milestones
   - Dependencies and prerequisites
   - Estimated effort and timeline

3. Code Examples
   - Key implementation patterns (pseudocode or actual code)
   - Error handling approach
   - Testing strategy

DOCUMENTATION REQUIREMENTS FOR REVIEW EFFICIENCY:

1. Architectural Decision Transparency
   For EACH major architectural choice (e.g., database, communication pattern,
   framework, deployment model), provide:
   - The decision made
   - 2-3 alternatives considered (e.g., "Considered: PostgreSQL, MongoDB, DynamoDB")
   - Trade-off analysis specific to our scale and constraints
   - Why this fits our team's expertise and infrastructure
   - Citations to relevant context (ADRs, past decisions, similar implementations, or team discussions)

2. Context Integration
   - Link to or reference whatever context sources you have (standards docs, past implementations, discussions)
   - If you have formal ADRs, reference them; if not, cite similar past problems and how they were solved
   - Cite rationale for tech choices (from docs, past projects, or team discussions)
   - Include references to related system components (links, file paths, or project names)

3. Implementation Reality Check
   - Note team expertise gaps and mitigation plans
   - Identify infrastructure changes needed
   - Flag potential performance bottlenecks with scale estimates
   - List external dependencies and their stability

FINAL STEP - REVIEWER FAQ:
After generating the artifact, analyze it from a reviewer's perspective.
Generate an FAQ section addressing 5-7 questions a senior engineer would
likely ask when reviewing this artifact. Structure each FAQ entry as:
- Question: [Anticipated question]
- Answer: [Clear, concise answer with citations to the artifact and/or organizational context]

Focus FAQ on:
- Clarifying complex decisions
- Addressing edge cases or scale concerns
- Connecting to broader system implications
- Justifying departures from standard approaches (or industry norms if no team standards exist)

Example Usage Context:

  • Feature: Real-time notifications system
  • Feature: Data pipeline for analytics
  • Feature: Authentication service migration
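
To make the placeholders concrete, the opening of a filled-in prompt for the first item could look like this (the context references are illustrative, not prescriptive):

Generate a technical requirements document for a real-time notifications system with these requirements:

CONTEXT ARTIFACTS AVAILABLE:
- docs/adrs/: past architectural decisions, including our message-queue choice
- The "payments-service" project: established our microservice patterns
- Rough scale expectation from past team discussions: tens of thousands of concurrent clients

[continue with ARTIFACT REQUIREMENTS and the remaining sections of the template above]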

2. Pull Request Descriptions

Review Goal: Enable reviewers to understand the change, validate approach, and assess risks quickly without reading all the code first.

Generate a pull request description for [CODE_CHANGE_SUMMARY] with these requirements:

CONTEXT ARTIFACTS AVAILABLE:
Use whatever context you have available. Examples might include:
- Coding standards or PR guidelines (formal or informal)
- Related issue/ticket (link or reference)
- Design doc if one exists (or reference to discussion where approach was decided)
- Similar past PRs that established patterns
- Files or components this change relates to

ARTIFACT REQUIREMENTS:
1. Change Summary
   - What changed (high-level)
   - Why this change was needed
   - User/system impact

2. Implementation Approach
   - Key technical decisions made
   - Files/components modified
   - Testing performed

DOCUMENTATION REQUIREMENTS FOR REVIEW EFFICIENCY:

1. Approach Transparency
   For significant implementation decisions (not trivial code changes), provide:
   - The approach taken
   - 1-2 alternative approaches considered
   - Why this approach was chosen (performance, maintainability, simplicity, etc.)
   - Citations to coding standards or design patterns used

2. Context Integration
   - Link to or reference related issue/ticket/discussion
   - Reference design doc if one exists (or note where approach was discussed/decided)
   - Cite relevant past decisions or patterns being followed
   - Link to related PRs if part of larger effort

3. Review Guidance
   - Highlight areas that need careful review
   - Note any deliberate departures from standards (with justification)
   - Flag performance implications or potential risks
   - Specify what testing was performed and what reviewers should verify

4. Change Scope
   - Estimated lines changed (visible in the diff stats, but contextualize if large)
   - Note refactoring vs. new functionality
   - List files that are mechanical changes vs. substantive logic changes

FINAL STEP - REVIEWER FAQ:
After generating the artifact, analyze it from a reviewer's perspective.
Generate an FAQ section addressing 5-7 questions a senior engineer would
likely ask when reviewing this artifact. Structure each FAQ entry as:
- Question: [Anticipated question]
- Answer: [Clear, concise answer with citations to the artifact and/or organizational context]

Focus FAQ on:
- Clarifying complex decisions
- Addressing edge cases or scale concerns
- Connecting to broader system implications
- Justifying departures from standard approaches (or industry norms if no team standards exist)

Example Usage Context:

  • Feature implementation PRs
  • Refactoring PRs
  • Bug fix PRs with non-obvious solutions
  • Performance optimization PRs
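
For a sense of what the generated "Review Guidance" section might look like, here is a sketch (the file names, numbers, and details are invented for illustration):

Review Guidance:
- Look closely at the retry logic in the rate-limit middleware; it departs from our usual backoff helper because that helper is not async-safe.
- The changes under tests/fixtures/ are mechanical renames; the substantive logic is in api/middleware/rate_limit.py.
- Load-tested locally at roughly 500 req/s; please verify the Redis key TTL matches the product requirement.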

Note: For trivial PRs (typo fixes, dependency bumps, etc.), a simpler description is appropriate. Use this template for non-trivial changes.


Customization Guide

Every team has unique context, standards, and review needs. Here's how to adapt these templates for your organization.

1. Identify Your Context Artifacts

The templates reference organizational knowledge sources. Work with whatever you have—formal or informal.

Start by listing what you actually have (not what you wish you had):

  • Coding standards or style guides (even if just "how we do it around here")
  • Past technical decisions and their rationale (ADRs, wiki pages, Slack threads, README sections)
  • Technology preferences (documented or just "we always use X because...")
  • System architecture overviews (diagrams, key README files, onboarding docs)
  • Team conventions that aren't written down (these can be described in the prompt)
  • Key implementations that set patterns (e.g., "see the auth service implementation")
  • Process knowledge (testing approaches, deployment practices)

Make context AI-accessible (in order of effort):

Easiest - Reference what exists:

  • Point to README files, code comments, or key files
  • Reference Slack conversations or email threads (with dates/participants)
  • Cite specific past projects or PRs that established patterns

Medium - Consolidate scattered knowledge:

  • Create simple markdown files capturing "how we do things"
  • Document informal team conventions
  • Link to or excerpt relevant Slack discussions

Most thorough - Build formal documentation:

  • Create proper ADR collections
  • Write comprehensive standards docs
  • Build searchable wikis

Example - Team with minimal formal docs:

CONTEXT ARTIFACTS AVAILABLE:
- README.md in each service: Documents our per-service conventions
- Past decision discussion in #architecture Slack channel (2024-Q3)
- The "payments-service" project: Established our microservice patterns
- Tech stack: Python/Flask, PostgreSQL, Redis (chosen for team familiarity)
- Deployment: GitHub Actions to AWS ECS (see .github/workflows/)
- For patterns: Follow what payments-service does unless there's a reason not to

Example - Team with more formal docs (these could live in a monorepo, an internal wiki, or separate repositories):

CONTEXT ARTIFACTS AVAILABLE:
- docs/standards/coding.md: Code style, patterns, and quality requirements
- docs/standards/architecture.md: Architectural principles we follow
- docs/adrs/: Past architectural decisions (using ADR format)
- docs/tech_stack.md: Approved technologies with rationale
- docs/deployment.md: How we deploy, test, and monitor
- runbooks/: Operational knowledge for production systems

The key: Use what you have. AI can work with references to existing code, past discussions, or informal conventions—not just formal documentation.

2. Adjust Based on Feedback

After using these prompts, collect feedback:

What to track:

  • Review time (before/after using template)
  • Number of review roundtrips (comments requiring clarification)
  • Reviewer confidence in approving
  • Common questions still arising (add to FAQ prompt)

Iterate monthly:

  • Add FAQ prompts for questions that still come up
  • Remove FAQ prompts for questions being answered well
  • Adjust "alternatives considered" requirements based on decision quality
  • Refine context artifact descriptions for better AI utilization
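
A lightweight log is usually enough for this tracking; for example (the fields and numbers are purely illustrative):

2025-03 | tech_requirements | review time: 25 min (was ~60) | roundtrips: 1 | recurring question: data retention -> add to FAQ prompt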

5. Version Control Your Prompts

Treat these prompts as code:

your-repo/
├── .prompts/
│   ├── tech_requirements.md
│   ├── pr_description.md
│   ├── adr.md
│   ├── api_spec.md
│   └── CHANGELOG.md

  • Store prompts in version control
  • Document changes in CHANGELOG.md
  • Review prompt changes like code changes
  • A/B test prompt modifications before rolling out

Note: Alternatively, store the prompts as custom commands if you use Claude Code or a similar tool, as sketched below.
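
With Claude Code, for example, prompts stored as markdown files under .claude/commands/ become project slash commands (the file names here are just suggestions):

your-repo/
├── .claude/
│   └── commands/
│       ├── tech-requirements.md
│       └── pr-description.md

Invoking /tech-requirements in a session then loads that stored prompt.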



Questions or Feedback?

This template is evolving based on real-world usage. If you:

  • Adapt this for a new artifact type successfully
  • Discover review-efficiency patterns not covered here
  • Find sections that need clarification

Share your learnings! This helps the broader community engineer better AI artifacts.


Remember: The goal isn't perfect artifacts. It's artifacts that respect reviewer time by making validation efficient instead of exhaustive.

  • Time investment: 15-30 minutes to adapt a prompt
  • Review time saved: 15-30+ minutes per artifact
  • Break-even: First use
  • 10x ROI: After 10 uses, you've transformed your review process

Start Monday. Pick one artifact type. Enhance the prompt. Review the difference.
