This file provides baseline guidance for Claude Code (claude.ai/code) when working with code in any repository.
## Memory Hierarchy

Claude Code loads memory in this order (most specific wins):

1. Project rules — `.claude/rules/*.md` (topic-specific, may be path-scoped)
2. Project memory — `./CLAUDE.md` or `./.claude/CLAUDE.md`
3. User memory — `~/.claude/CLAUDE.md` (this file — applies to all projects)
4. Local overrides — `CLAUDE.local.md` (machine-specific or personal; auto-gitignored)
Project-specific instructions OVERRIDE these global instructions when they conflict.
Always check ./CLAUDE.md and ./.claude/rules/ before acting on unfamiliar projects.
## Global Guidelines

### General Behavior

- Don't use emojis when creating markdown unless absolutely necessary
- Do what has been asked; nothing more, nothing less
- When the user asks you to do something, DO IT. Do not over-plan, ask excessive clarifying questions, or spend time exploring when the task is clear. If the user asks for a migration, generate the migration. If they ask for a fix, fix it.
- NEVER create files unless they're absolutely necessary for achieving your goal
- ALWAYS prefer editing an existing file to creating a new one
- NEVER proactively create documentation files (*.md) or README files unless explicitly requested
### Pull/Merge Requests

- When creating a pull/merge request, update the necessary docs and changelogs to reflect new changes and features
- Follow the project's commit message conventions (check existing commits)
- Reference relevant issues/tickets in commit messages
### Code Documentation

- Always add JSDoc comments to functions, methods, types, and interfaces
- Focus on providing context that enhances understanding for AI coding assistants
- Minimize token usage while maintaining clarity
- Prioritize comments that explain the "why" behind complex logic, business rules, and non-obvious decisions
- Use the code-comments agent when available for TypeScript/JavaScript code
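For instance, a comment in the intended style might look like this (the function and the business rule it documents are hypothetical):

```typescript
/**
 * Computes the prorated refund for a cancelled subscription.
 *
 * Why: the result is floored to whole cents because the (hypothetical)
 * payment provider rejects sub-cent refund amounts.
 */
function proratedRefund(amountCents: number, daysUsed: number, daysTotal: number): number {
  return Math.floor(amountCents * (1 - daysUsed / daysTotal));
}
```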
### Project-Specific Guides

- When documentation is requested, check for project-specific documentation tools
- Follow the project's documentation structure and conventions
### Git Workflow
When asked to commit changes, always stage and commit ALL modified files in a single commit unless explicitly told otherwise. Do not leave uncommitted files requiring a second prompt.
### Planning & Implementation
When in plan mode, NEVER start implementing code until the user explicitly approves the plan. Present the plan, ask for approval, and wait. Do not exit plan mode on your own.
### Debugging
When asked to fix a bug, investigate the root cause FIRST before attempting fixes. Use Chrome DevTools, database queries, or runtime inspection to understand the actual state before writing code. Do not guess at fixes iteratively.
### Security Reminders

- Never commit sensitive information (API keys, passwords, tokens)
- Validate and sanitize user input
- Check authentication/authorization before protected operations
- Follow the principle of least privilege
### Testing

- Write tests for new functionality when test infrastructure exists
- Update existing tests when modifying code
- Follow the project's testing conventions (check existing test files)
**If plan mode is active, STOP immediately.** Do not proceed to any step below. Instead, respond:
> "The plugin-creator skill writes files and directories, which is not allowed in plan mode. Please exit plan mode first (`/exit-plan`), then invoke this skill again."
---
description: Stress-test a plan with a structured interview across technical, UX, edge case, and out-of-scope domains
argument-hint: "[plan-file-path] - omit to auto-detect from IDE or ~/.claude/plans/"
allowed-tools: Read, Glob, AskUserQuestion, Write, Edit
---
# /plan-interview

Stress-test a plan through a structured conversational interview before implementation begins.

## Usage

```
/plan-interview                                # auto-detects from IDE or latest in ~/.claude/plans/
/plan-interview ~/.claude/plans/my-feature.md  # use a specific plan file
```
## Instructions

### Step 1 — Resolve the plan file
Use the first match from this priority order:
1. Explicit argument: If `$ARGUMENTS` is provided, treat it as the file path and read it directly.
2. Currently open file: If no argument is given, check whether a file is currently open or selected in the IDE (provided via context). If it exists, is a `.md` file, and its content looks like a plan (contains headings like `## Implementation`, `## Plan`, `## Steps`, `## Instructions`, or similar structural markers), use it.
3. Project-level settings: Read `.claude/settings.json` in the current project directory. If a `"plansDirectory"` key exists, glob `*.md` files from that path and use the most recently modified file. This takes precedence over the global default in item 4.
4. Latest plan in `~/.claude/plans/`: If none of the above applies, use Glob on `~/.claude/plans/*.md`, sort by modification time, and select the most recently modified file.
Once resolved, tell the user which file will be used (e.g., "Interviewing plan: ~/.claude/plans/my-feature.md") before proceeding.
If no plan file can be found via any of these methods, tell the user and stop.
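As a rough sketch, the same priority order expressed in code might read as follows (helper names and the `glob` dependency are illustrative, not Claude Code internals):

```typescript
import { readFileSync, existsSync, statSync } from "node:fs";
import { homedir } from "node:os";
import { globSync } from "glob"; // assumes the npm "glob" package

// True if the file carries plan-like headings (## Implementation, ## Plan, ...).
const looksLikePlan = (path: string): boolean =>
  /^##\s+(Implementation|Plan|Steps|Instructions)/m.test(readFileSync(path, "utf8"));

// Most recently modified .md file in a directory, if any.
const latestPlan = (dir: string): string | undefined =>
  globSync(`${dir}/*.md`).sort((a, b) => statSync(b).mtimeMs - statSync(a).mtimeMs)[0];

function resolvePlanFile(argument?: string, openFile?: string): string | undefined {
  if (argument) return argument;                        // 1. explicit argument
  if (openFile && openFile.endsWith(".md") && looksLikePlan(openFile)) {
    return openFile;                                    // 2. file open in the IDE
  }
  if (existsSync(".claude/settings.json")) {            // 3. project plansDirectory
    const settings = JSON.parse(readFileSync(".claude/settings.json", "utf8"));
    if (settings.plansDirectory) return latestPlan(settings.plansDirectory);
  }
  return latestPlan(`${homedir()}/.claude/plans`);      // 4. global default
}
```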
### Step 2 — Read and analyze the plan
Read the resolved plan file. Extract the following to guide question generation:

- Goal: What is being built and why?
- Key components: What files, services, or systems are involved?
- Complexity: Is the plan short/focused, medium, or complex?
- UI signals: Does the plan touch user-facing code (e.g., React components, .tsx/.css files, pages, styles)?
If any UI signals are detected, always include Round 2 — even for plans classified as short/focused. When triggering Round 2 on a short plan, briefly note what was detected (e.g., "Running Round 2 — plan references React components and .tsx files") so the user understands why.
### Step 3 — Conduct the structured interview
Generate questions dynamically from the plan content — do not use generic or hardcoded questions. Each AskUserQuestion call may include up to 4 questions.
**Round 1 — Technical & Trade-offs** (always run):

Ask up to 4 questions covering:

- The most uncertain architectural or implementation decision in the plan
- Build vs. buy, library choice, or API design trade-offs
- Performance, scalability, or data model concerns specific to this plan
- Any unclear integration points or dependencies
Use multiSelect: true for questions where the user may want to flag multiple concerns (e.g., "Which of these areas need more investigation?").
**Round 2a — UI/UX & Flows** (run for medium and complex plans, or any plan with UI involvement — see Step 2):

Ask up to 4 questions covering:

- User flows: happy path, error states, loading states, empty states
- Mobile or responsive behavior concerns
- Motion and animation: prefers-reduced-motion, transitions, focus indicators after animation
- Any UI state not covered by the plan (e.g., skeleton loading, optimistic updates, error recovery)
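To make the first bullet concrete, a single component typically has to handle all four flow states; a toy sketch (all names hypothetical):

```typescript
import React from "react";

type ListState =
  | { kind: "loading" }
  | { kind: "error"; message: string }
  | { kind: "ready"; users: string[] };

// Covers happy path, error, loading, and empty states in one place.
function UserList({ state }: { state: ListState }) {
  if (state.kind === "loading") return <p aria-busy="true">Loading…</p>;
  if (state.kind === "error") return <p role="alert">{state.message}</p>;
  if (state.users.length === 0) return <p>No users yet.</p>; // empty state
  return (
    <ul>
      {state.users.map((u) => (
        <li key={u}>{u}</li>
      ))}
    </ul>
  );
}
```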
**Round 2b — Accessibility & Semantic Structure** (run immediately after Round 2a when Round 2 is triggered):

- Screen reader support: ARIA roles, labels, aria-describedby for errors, live regions
- WCAG 2.1 AA compliance: color contrast (4.5:1 text, 3:1 UI), touch targets (44×44px min)
- Semantic HTML: heading hierarchy, landmark regions, form label association
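As a concrete reference for the aria-describedby pattern in the first bullet, a form field might wire up its error message like this (hypothetical component):

```typescript
import React from "react";

// The error text is linked to the input via aria-describedby, so screen
// readers announce it when the field receives focus.
function EmailField({ error }: { error?: string }) {
  return (
    <>
      <label htmlFor="email">Email</label>
      <input
        id="email"
        type="email"
        aria-invalid={!!error}
        aria-describedby={error ? "email-error" : undefined}
      />
      {error && (
        <p id="email-error" role="alert">
          {error}
        </p>
      )}
    </>
  );
}
```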
**Round 3 — Edge Cases & Best Practices** (run for complex plans only):

Ask up to 4 questions covering:

- Critical failure modes or race conditions
- Concurrent user scenarios or data conflicts
- Regression risks: which existing tests might break, what backward-compatibility contracts exist (API shape, component props, data schema), and whether visual or behavioral regression testing is in place
- Which best practices should guide implementation: security, performance, test coverage, DX
- Any remaining open questions from the plan that haven't been addressed
### Step 4 — Surface out-of-scope concerns
After the structured rounds, review the full plan one more time and identify any issues that were not covered by the interview questions. These are concerns you observed independently — not topics already raised by the user. Look for:
- Missing sections a plan of this type would normally include (e.g., rollback strategy, auth/permissions, data migration, monitoring)
- Implicit assumptions in the plan that could silently break implementation
- Ownership or responsibility gaps (who handles what is unclear)
- Naming, scope, or intent ambiguities that could cause misalignment during implementation
- Risks that fall outside the Technical / UI / Edge Case domains
- Regression blind spots: the plan does not identify which existing tests, API contracts, or user-visible behaviors could break
If any out-of-scope concerns exist, present them as a clearly labelled section in the chat before the summary:
### Additional Concerns (Outside Structured Rounds)

- [Concern 1]: [Brief explanation of why this matters]
- [Concern 2]: [Brief explanation of why this matters]
If no additional concerns exist, skip this section silently.
**Complexity Check** (always run):
After the out-of-scope scan, evaluate the proposed approach against what the simplest working solution would look like. For each element that appears over-engineered, ask: Could a built-in, a single function, or a native API replace this abstraction? Only surface real issues — do not flag complexity for its own sake on genuinely complex plans. Only name a simpler alternative when one is clearly apparent; omit concerns where no obvious alternative exists.
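For instance, the kind of swap this check is meant to surface (a hypothetical example, not taken from any specific plan):

```typescript
const original = { user: { id: 1, tags: ["a", "b"] } };

// A plan that proposes a hand-rolled recursive clone helper...
function deepClone<T>(value: T): T {
  if (Array.isArray(value)) return value.map(deepClone) as T;
  if (value !== null && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value).map(([k, v]) => [k, deepClone(v)])
    ) as T;
  }
  return value;
}

// ...can usually be replaced by a built-in for plain data:
const copy = structuredClone(original);
console.log(deepClone(original).user.id, copy.user.id); // both 1, independent copies
```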
If any complexity concerns are found, present them under a clearly labelled section:

### Complexity Concerns

- [Element]: [Simpler alternative and brief rationale]
Skip this section silently if no complexity concerns are found.
### Step 5 — Compile and present the review summary
After all rounds and the out-of-scope check are complete, output a structured summary in the chat:
## Plan Interview Summary

### Key Decisions Confirmed
[List decisions the user confirmed or clarified during the interview]

### Open Risks & Concerns
[List risks, unknowns, or concerns surfaced — with brief context]

### Recommended Next Steps
[Amendments to the plan, additional spikes, or clarifications needed before implementation]

### Simplification Opportunities
[Concise list of areas where the plan can be reduced in scope or abstraction, with specific simpler alternatives — omit this section if no complexity concerns were found]
### Step 6 — Offer to save findings
After presenting the summary, ask the user:
"Would you like me to append this interview summary to the plan file?"
Do not write to the plan file unless the user explicitly confirms. If they confirm, append the summary as a new ## Interview Summary section at the end of the plan file using the Edit tool.
- Avoid long paragraphs; always break the plan into small, concise, numbered, testable steps.
- Use bullet points or numbered lists for clarity.
- Scope discipline: Only plan what was explicitly requested. Do not add enhancements, refactors, or improvements beyond the stated task. If something feels related but wasn't asked for, put it in Next Steps instead.
- At the end of each plan, give me a list of unresolved questions to answer, if any.
- Always rename the plan file to reflect the plan's purpose clearly.
- Always use the ExitPlanMode tool to request plan approval — never ask via text or AskUserQuestion.
- Always commit plans to version control.
## Plan Structure: Next Steps

- Every plan must end with a Next Steps section listing optional follow-up work that is outside the current scope but worth considering later.
- Next Steps items should be brief (one line each) and clearly marked as out-of-scope for the current task.
# tdd-fix

Use when the user asks to reproduce a bug with a failing test then fix it in a test-driven loop, "TDD fix", "write a red test then make it green", or wants an autonomous red-green cycle capped at N iterations. Does not design tests from scratch — use code-testing-agent for that. Does not review test quality — use reviewing-tests for that.

Given a bug description, write a failing test that reproduces it, then enter an autonomous loop — run tests, analyze failures, edit code, re-run — until green or 10 iterations. Log each iteration's hypothesis. After passing, run the full suite, commit with a `fix:` prefix, and open a PR.

Freedom level: Strict — Follow these steps in order. Do not skip or combine steps. Stop at each hard-stop marker.

Use TodoWrite to create todos for Steps 1–9, all status: "pending". Mark each status: "completed" as you finish it.
## Step 1: Parse Bug Description
Extract from the invocation message:
| Field | What to extract |
|---|---|
| Symptom | What the code currently does wrong |
| Expected behavior | What it should do instead |
| Affected file(s) | Source file(s) containing the bug — explicit path, or inferred from symptom |
| Test file | Corresponding test file (infer from naming convention if not given) |
If any field is missing and cannot be inferred, use AskUserQuestion with a focused question for the missing field only. Do not ask for everything at once.

Do not open any source file yet. Only parse the message.
## Step 2: Write the Failing Test

Locate the test file identified in Step 1. If multiple candidates exist, prefer the one closest in the directory tree to the affected source file.

Read the test file (Read) to understand its structure, assertion style, and import pattern.

Append a new test case that will fail because of the bug. Use Edit (not Write) to add to the existing file. The test must:

- Target exactly the behavior described in Step 1
- Use the project's existing assertion style
- Include a comment `# tdd-fix: reproducing <symptom>` (or language equivalent) to identify it later

Do NOT edit any production code in this step.
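For example, an appended test might look like the following (vitest-style, reusing the hypothetical add() bug from the Step 3 examples; paths and names are illustrative):

```typescript
import { describe, it, expect } from "vitest"; // assumes a vitest project
import { add } from "../src/math";             // hypothetical affected file

describe("add", () => {
  // tdd-fix: reproducing add() returning the difference instead of the sum
  it("returns the sum of two numbers", () => {
    expect(add(2, 3)).toBe(5); // fails while add() still subtracts
  });
});
```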
Run the test once (Bash) to confirm it fails. If it unexpectedly passes, stop and use AskUserQuestion:

> "The new test passed without any code changes — the bug may already be fixed, or the test may not be reproducing it correctly. How do you want to proceed?"
## Step 3: Autonomous Fix Loop (max 10 iterations)
Initialize an iteration log. Render it as a markdown table and update it live after each iteration:
| # | Hypothesis | Change Made | Result |
|---|------------|-------------|--------|
For each iteration i from 1 to 10:
### 3a — Form a hypothesis

Read the failure output from the previous run (or from Step 2 on iteration 1). In one sentence, state why the test is failing and what in the production code is responsible. Write the hypothesis to the iteration log.
Examples of well-formed hypotheses:

- "Operator in add() is subtraction, not addition."
- "parseDate does not handle the Z timezone suffix."
- "Off-by-one: loop ends at < n but should be <= n."
### 3b — Edit the production file

Use Edit (not Write) to apply the minimal change implied by the hypothesis. Record a one-line diff summary in the log.

Do not refactor unrelated code. Do not add unrelated tests. Change only what the hypothesis requires.
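For the first example hypothesis in 3a, the minimal change would flip a single operator (hypothetical file):

```typescript
// src/math.ts (hypothetical). Diff summary: "add(): change '-' to '+'"
export function add(a: number, b: number): number {
  return a + b; // was: return a - b;
}
```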
### 3c — Run the scoped test

Run only the test written in Step 2 via Bash. Record the result (PASS or FAIL + excerpt) in the iteration log.

- If PASS: exit the loop and proceed to Step 5.
- If FAIL: if i < 10, increment and go to 3a. If i == 10, proceed to Step 4.
Show the updated iteration log after every iteration.
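Viewed as code, the control flow of this step is roughly the following sketch (the three callbacks stand in for the tool calls above; none of this is a real API):

```typescript
type Attempt = { i: number; hypothesis: string; change: string; result: "PASS" | "FAIL" };

// Red-green loop: hypothesize (3a), edit (3b), run the scoped test (3c),
// logging every attempt; stop on the first pass or after 10 iterations.
function fixLoop(
  formHypothesis: (failure: string) => string,
  applyFix: (hypothesis: string) => string, // returns a one-line diff summary
  runScopedTest: () => { pass: boolean; output: string },
  initialFailure: string // failure output from Step 2's first red run
): Attempt[] {
  const log: Attempt[] = [];
  let failure = initialFailure;
  for (let i = 1; i <= 10; i++) {
    const hypothesis = formHypothesis(failure); // 3a
    const change = applyFix(hypothesis);        // 3b
    const run = runScopedTest();                // 3c
    log.push({ i, hypothesis, change, result: run.pass ? "PASS" : "FAIL" });
    if (run.pass) return log; // exit the loop, proceed to Step 5
    failure = run.output;     // feed the next hypothesis
  }
  return log;                 // exhausted, proceed to Step 4
}
```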
## Step 4: Hard Cap — Loop Exhausted
If the loop reaches 10 iterations without a passing test:
1. Print the full iteration log.
2. Surface the last three hypotheses and why each failed.
3. Output:

```
tdd-fix stopped after 10 iterations. The test is still failing.
No commit or PR will be created.

Suggestions for next steps:
- Review the iteration log above for patterns.
- Consider whether the bug is in a different file than expected.
- The test file and any partial edits remain on disk for manual inspection.
```
STOP. Do not commit, do not open a PR.
## Step 5: Regression Sweep
Once the scoped test passes, run the full test suite with no scope filter. Use Bash with the appropriate full-suite command for the project:

- Node/JS: `npm test`, `yarn test`, `pnpm test`, or `npx vitest run`
- Python: `pytest`, `python -m pytest`
- Go: `go test ./...`
- Shell: run the top-level test runner script if one exists
If any previously-passing test now fails:
1. Report the regressions — test names and failure excerpts.
2. Do not commit.
3. Output:

```
Regression detected. The fix broke existing tests (listed above).
No commit or PR will be created. The changes remain on disk.
```
STOP.
If all tests pass, continue to Step 6.
## Step 6: Summarize the Fix
Print a summary block before committing:
```
## tdd-fix Summary

Bug: <symptom from Step 1>
Fix: <final hypothesis from Step 3>
Iterations: <i of 10>
Files changed:
- <production file(s) edited>
- <test file appended>
Full suite: PASS
```
## Step 7: Commit via commit-agent
Invoke the commit-agent skill. When it drafts the commit message, ensure:

- Type is `fix`
- Scope is the most-changed top-level directory
- Description summarizes the symptom in imperative mood

Example: `fix(tests/demo): correct add() operator from subtraction to addition`

The commit-agent skill handles staging, pre-commit hooks, and conventional format — do not duplicate that logic here.
## Step 8: Open PR via pr-agent

Invoke the pr-agent skill. When it drafts the PR body, include the iteration log from Step 3 under a `## How it was found (tdd-fix)` section.

The pr-agent skill handles push, platform detection (GitHub/GitLab), and branch checks — do not duplicate that logic here.
## Step 9: Stop

STOP here. Do not analyze code further, do not re-run tests, do not suggest refactors or cleanup, do not open additional issues. The fix is complete when the PR URL is returned by pr-agent.