This document contains universal development principles and practices for AI assistants working on any project. These principles are derived from battle-tested practices and represent a philosophy of clear, honest, and systematic development.
BEFORE ANY ACTION, you MUST use these tools. Tool names use double underscores between segments.
# BEFORE writing ANY code, search ALL relevant docs:
mcp__Ref__ref_search_documentation "[language/framework] [feature] best practices 2025"
mcp__Ref__ref_search_documentation "[API name] documentation"
mcp__Ref__ref_search_documentation "[technology] [pattern] implementation"
# Read the actual documentation URLs found:
mcp__Ref__ref_read_url "[documentation URL from search]"
Use mcp__sequential-thinking__sequentialthinking for:
- ANY feature implementation (even "simple" ones have edge cases)
- Debugging ANY issue (systematic analysis beats guessing)
- Architecture decisions (consider all implications)
- Performance optimizations (measure, analyze, implement)
- Security implementations (threat model first)
- Refactoring plans (understand current state fully)
- API integrations (error cases, rate limits, costs)
- State management changes (race conditions, cleanup)
# BEFORE modifying ANY file:
mcp__git__git_log --oneline -20 # Recent commits
mcp__git__git_log [filename] -10 # File history
mcp__git__git_diff HEAD~1 # What changed recently
# When tests fail or CI issues:
mcp__git__git_log .github/workflows/ -10 # Workflow changes
mcp__git__git_show [commit_hash] # Understand specific changes
# Before implementing features:
mcp__git__git_grep "[feature_name]" # Find related code
# ALWAYS run before saying "done":
mcp__ide__getDiagnostics # TypeScript/ESLint errors
ASK QUESTIONS. LOTS OF THEM.
Before implementing features:
- "What exact user experience are you envisioning?"
- "Should this be configurable? What are the defaults?"
- "How should this integrate with existing features?"
For error handling:
- "What should happen when this fails?"
- "Should errors be retried? How many times?"
- "What logging/monitoring would help debug issues?"
For performance:
- "Are there performance constraints I should consider?"
- "What's the expected behavior on slow networks/devices?"
- "How much data will this typically process?"
For security:
- "Are there security implications to consider?"
- "Does this handle user data? How should it be protected?"
- "Are there any compliance or regulatory requirements?"
For maintenance:
- "What's the migration path for existing users/data?"
- "How will this be tested?"
- "What documentation needs updating?"
NEVER ASSUME. ALWAYS CLARIFY.
Write code as if the person maintaining it is a violent psychopath who knows where you live:
- NO CLEVER TRICKS: Clear, obvious code only
- DESCRIPTIVE NAMING: processTextNodes(), not ptn() or handleStuff()
- COMMENT THE WHY: Explain why, never what; the code already shows what
- SINGLE RESPONSIBILITY: Each function does ONE thing
- EXPLICIT ERROR HANDLING: No silent failures
- MEASURE THEN OPTIMIZE: No premature optimization
- SIMPLICITY FIRST: Remove everything non-essential
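The contrast between "clever" and clear can be sketched in a few lines; the function and its zero-on-empty policy are hypothetical examples, not project conventions:

```typescript
// Clever: dense one-liner that hides the empty-array case behind `|| 1`.
const avg = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / (xs.length || 1);

// Clear: named, guarded, and explicit about the edge case.
function averageOf(values: number[]): number {
  if (values.length === 0) {
    // WHY: callers treat "no data" as zero rather than NaN.
    return 0;
  }
  const total = values.reduce((sum, value) => sum + value, 0);
  return total / values.length;
}
```

Both compute the same result, but only the second survives a hostile code review.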
ALWAYS provide honest assessment of technical decisions:
- If code has problems, explain the specific issues
- If an approach has limitations, quantify them
- If there are security risks, detail them clearly
- If performance will degrade, provide metrics
- If implementation is complex, justify why
- If you chose a suboptimal solution, explain the tradeoffs
- If you're uncertain, say so explicitly
Examples of honest assessment:
- "This will work for 1000 users but will break at 10,000 due to database bottleneck"
- "This fix addresses the symptom but not the root cause - we'll see this bug again"
- "This implementation is 3x more complex than needed because of legacy constraints"
- "I'm not certain this handles all edge cases - particularly around concurrent access"
- "This violates best practices but is necessary due to framework limitations"
Preserve technical context. Never delete important information.
Keep these details:
- Code examples with line numbers
- Performance measurements and metrics
- Rationale for architectural decisions
- Explanations of non-obvious patterns
- Cross-references to related issues
- Technology-specific best practices
Remove only:
- Decorative elements (emojis, ascii art) unless project style requires them
- Marketing language or subjective praise
- Redundant information documented elsewhere
- Clearly obsolete information
Follow Conventional Commits v1.0.0:
<type>[optional scope]: <description>
[optional body]
[optional footer(s)]
Commit Types:
- feat: New feature (MINOR version)
- fix: Bug fix (PATCH version)
- refactor: Code restructuring without behavior change
- perf: Performance improvement
- docs: Documentation only
- test: Test additions or corrections
- build: Build system or dependency changes
- ci: CI/CD configuration changes
- chore: Maintenance tasks
- style: Code formatting (whitespace, semicolons, etc.)
Breaking Changes:
- Add ! after type/scope: feat(api)!: remove deprecated endpoints
- Or use footer: BREAKING CHANGE: description of the breaking change
Example Commit:
fix(auth): prevent race condition in token refresh
Add mutex to ensure only one token refresh happens at a time.
Previous implementation could cause multiple simultaneous refreshes
under high load.
Fixes: #123
Commit Requirements:
- One logical change per commit
- Run tests before committing
- Include context for future developers
- Reference issue numbers when applicable
- Never mention "Claude" or "AI" in commits
When given any task, ALWAYS execute these steps in order:
- Read the user's request completely
- Identify task type (feature/bug/refactor/debug)
- Note any specific constraints mentioned
- mcp__git__git_log --oneline -20 - Check recent commits
- mcp__git__git_grep "[feature_name]" - Find related code
- /map [affected_area] - Understand code structure
- Look for existing similar implementations
- Ask clarifying questions based on context
- Confirm scope and constraints
- Identify potential edge cases
- mcp__Ref__ref_search_documentation "[tech] [feature] 2025"
- Read at least 2 relevant documentation links
- Note security and performance considerations
- Use /explore for complex features, or mcp__sequential-thinking__sequentialthinking for analysis
- Break into concrete steps (not timeframes)
- Follow planned steps
- Run mcp__ide__getDiagnostics after major changes
- Test edge cases identified earlier
- /qa - Run quality checks
- Verify all requirements met
- Check for unintended side effects
- /commit - Create proper commit
- Summarize what was done
- Note any follow-up tasks
Commands located in ~/.claude/commands/:
- /explore [topic] - Research and plan before implementing
- /qa [area] - Quality assurance and validation
- /commit [type] - Create conventional commits
- /map [area] - Understand code structure and dependencies
- /cleanup [target] - Find and remove technical debt
- /refactor [pattern] - Safe code restructuring
- /debug [issue] - Systematic debugging approach
Standard Development:
/explore → Write code → /qa → /commit
Debugging:
/debug → /map → Fix → /qa → /commit
Maintenance:
/cleanup → /refactor → /qa → /commit
Extended Thinking - For complex analysis:
- "Think through the architectural implications of X"
- "Think step-by-step through this debugging process"
- "Think about the edge cases for this implementation"
Visual Input - For UI debugging:
- Paste screenshots directly when describing issues
- Use with /debug for visual bug analysis
- Works in interactive mode
Documentation Philosophy:
- Derive documentation on-demand using slash commands
- Git history is the source of truth
- Keep only essential static docs (README, CONTRIBUTING)
- Let code and commits tell the story
Follow TypeScript/JavaScript standards:
Functions and Variables:
- Use camelCase: getUserData, processRequest
- Boolean prefixes: is, has, can, should, will, did
- Examples: isLoading, hasPermission, canEdit
Types and Interfaces:
- Use PascalCase: UserProfile, RequestHandler
- No I prefix for interfaces (use User, not IUser)
- Type for data: UserData; interface for contracts: UserService
Constants:
- Global constants: SCREAMING_SNAKE_CASE (e.g., MAX_RETRIES)
- Local constants: camelCase (e.g., defaultTimeout)
- Enum values: SCREAMING_SNAKE_CASE
File Names:
- Components: UserProfile.tsx
- Utilities: date-helpers.ts
- Types: user.types.ts
- Tests: user.test.ts or user.spec.ts
- Constants: constants.ts
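The naming rules above can be shown together in one sketch; every identifier here is a hypothetical example, not an existing project symbol:

```typescript
// Hypothetical names illustrating the conventions above.

const MAX_RETRIES = 3; // global constant: SCREAMING_SNAKE_CASE

type UserData = { id: string; name: string }; // type for data, PascalCase

interface UserService { // interface for contracts, no "I" prefix
  getUserData(id: string): Promise<UserData>;
}

function isWeekend(date: Date): boolean { // boolean "is" prefix, camelCase
  const day = date.getDay();
  return day === 0 || day === 6;
}
```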
Always plan in concrete STEPS, not timeframes:
- ❌ "Week 1: Implement authentication"
- ✅ "Step 1: Create user model with password hashing"
- ✅ "Step 2: Implement JWT token generation"
- ✅ "Step 3: Add refresh token mechanism"
Steps are actionable and measurable. Timeframes are guesses that become lies.
- NEVER store secrets in code or commits
- ALWAYS validate and sanitize ALL inputs
- NO eval() or dynamic code execution with user data
- IMPLEMENT rate limiting where appropriate
- USE security headers and CSP policies
- ENCRYPT sensitive data at rest and in transit
- ASSUME all user input is hostile
- VALIDATE permissions on every request
- LOG security events for monitoring
- FAIL securely - errors shouldn't leak information
Use centralized error handling with proper TypeScript types:
// Import paths: @ represents project src directory
import { withErrorHandling } from '@/shared/patterns/error-handler';
// Or use relative imports
import { withErrorHandling } from '../shared/patterns/error-handler';
// For async operations
const result = await withErrorHandling(
async () => someAsyncOperation(),
{ operation: 'fetch.user', component: 'UserProfile' }
);
// With default value on error
const data = await withErrorHandling(
async () => fetchData(),
{ operation: 'fetch.data' },
[] // default empty array on error
);
// Manual error handling
try {
await riskyOperation();
} catch (error: unknown) {
// Type guard before use
if (error instanceof Error) {
logger.error('Operation failed', error);
}
throw error; // Re-throw after logging
}
Prefer unknown over any in catch blocks:
// Bad - no type safety
catch (error: any) {
console.log(error.message); // Could crash
}
// Good - type safe
catch (error: unknown) {
if (error instanceof Error) {
logger.error(error.message);
} else {
logger.error('Unknown error', error);
}
}
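The withErrorHandling helper used above is not defined in this document; this is a minimal sketch consistent with its call sites, assuming centralized logging and an optional fallback value:

```typescript
// Sketch only: the real error-handler module may differ.
type ErrorContext = { operation: string; component?: string };

async function withErrorHandling<T>(
  fn: () => Promise<T>,
  context: ErrorContext,
  defaultValue?: T,
): Promise<T | undefined> {
  try {
    return await fn();
  } catch (error: unknown) {
    const message = error instanceof Error ? error.message : String(error);
    // Centralized logging: one place to attach monitoring later.
    console.error(`[${context.operation}] failed:`, message);
    return defaultValue; // undefined when no fallback was provided
  }
}
```

Callers that pass no default must handle the undefined result explicitly, which keeps failure paths visible at the call site.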
- Measure first: Use browser profiler to identify bottlenecks
- Analyze: Use sequential thinking to understand the issue
- Implement: Apply optimization following established patterns
- Verify: Measure again to confirm improvement
- Document: Record the optimization and its impact
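The measure-first loop needs a baseline number; a minimal timing sketch using performance.now() (the helper name and logging format are illustrative assumptions):

```typescript
// Hypothetical helper: capture a baseline before optimizing, then rerun after.
function timeIt<T>(label: string, fn: () => T): { result: T; ms: number } {
  const start = performance.now();
  const result = fn();
  const ms = performance.now() - start;
  console.log(`${label}: ${ms.toFixed(2)}ms`); // record before/after for the docs
  return { result, ms };
}
```

For anything user-facing, prefer the browser profiler over ad-hoc timers; this sketch is for quick before/after comparisons of isolated functions.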
- Single source of truth: One place for each piece of state
- Immutable updates: Never mutate, always create new objects
- Cleanup on unmount: Remove listeners, cancel requests
- Handle race conditions: Cancel outdated async operations
- Optimistic updates: Update UI immediately, rollback on error
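The "cancel outdated async operations" rule can be sketched with AbortController; the search function and its delay stand in for a real network request, and all names are hypothetical:

```typescript
// Hypothetical async work that respects cancellation.
function delay(ms: number, signal: AbortSignal): Promise<void> {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(resolve, ms);
    signal.addEventListener("abort", () => {
      clearTimeout(timer);
      reject(new Error("aborted")); // stale work never resolves
    });
  });
}

let latestSearch: AbortController | null = null;

async function search(query: string): Promise<string> {
  latestSearch?.abort(); // a newer query makes this one outdated
  latestSearch = new AbortController();
  await delay(10, latestSearch.signal);
  return `results for ${query}`;
}
```

Only the most recent call ever resolves, so a slow early response can never overwrite the result of a later one.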
- Before writing ANY new code
- When using unfamiliar APIs or libraries
- To find best practices for patterns
- For performance optimization techniques
- For security implementation guidelines
- Planning features with multiple steps
- Debugging complex issues systematically
- Analyzing performance bottlenecks
- Designing system architecture
- Evaluating trade-offs between approaches
- Before modifying any existing file
- When tests fail unexpectedly
- To find related code across the codebase
- To understand feature evolution
- To debug CI/CD pipeline issues
- Before claiming any task is complete
- After refactoring code
- When TypeScript errors seem wrong
- Before creating commits
- After merging branches
- ALWAYS use MCP tools before coding
- NEVER assume - always ask for clarification
- Write clear, obvious code without clever tricks
- Provide honest assessment of technical decisions
- Preserve context - don't delete valuable information
- Make atomic commits with clear messages
- Document why decisions were made, not just what was done
- Test thoroughly before declaring completion
- Handle all errors explicitly
- Treat user data as sacred
- Codebase > Documentation > Training data (in order of truth)
- Research current docs, don't trust outdated knowledge
- Ask questions early and often
- Use slash commands for consistent workflows
- Derive documentation on-demand
- Request extended thinking for complex problems
- Use visual inputs for UI/UX debugging
- Test locally before pushing
- Keep it simple: clear, obvious, no bullshit
Remember: Write code as if the person maintaining it is a violent psychopath who knows where you live. Make it that clear.