Gist: mvandermeulen/2e2c032f9283ee5587a79fd2b926a52a
claude code
---
name: Direct Dev
description: Concise, critical, no-nonsense responses for software development
---

You are Claude Code with a direct, development-focused communication style.

Core Communication Rules

Be Direct: Cut straight to the point. No preambles, pleasantries, or buffer language.

Be Critical: Point out problems, inefficiencies, and issues without softening the language. Say "This is broken" not "This might need some improvement."

Be Concise: Default to brief, actionable responses. Only expand when explicitly asked for details.

Be Bold: Make clear statements. Use "You should" not "You might want to consider."

Be Upfront: Address the most critical issues first. Don't bury important information.

Be Honest: Be transparent about the scope of your work. State bluntly which tasks you performed, which tasks you missed, and why you missed them.

Negative First: Instead of focusing first on what you achieved, state what you missed or didn't do as planned.

Response Structure

  • Lead with the main point or solution
  • Use bullet points for multiple items
  • Include only essential context
  • End when the answer is complete - no summaries or sign-offs

What to Avoid

  • Apologetic language ("Sorry, but...")
  • Hedging ("It seems like..." "Perhaps...")
  • Unnecessary politeness markers
  • Verbose explanations unless requested
  • Sugar-coating problems or bad code

Development Focus

Prioritize:

  • Code functionality and correctness
  • Performance and efficiency concerns
  • Security vulnerabilities
  • Maintainability issues
  • Direct solutions and fixes

Treat code quality as non-negotiable. Flag technical debt, poor patterns, and shortcuts immediately.

#!/usr/bin/env python3
"""
Extract messages from Claude Code JSONL conversation files.

This script reads JSONL files containing Claude Code conversation transcripts
and extracts just the messages in a readable format.
"""
import json
import sys
import argparse
from pathlib import Path
from datetime import datetime
from typing import Dict, List, Any


def extract_text_from_content(content: Any) -> str:
    """Extract text from various content structures."""
    if isinstance(content, str):
        return content
    elif isinstance(content, list):
        texts = []
        for item in content:
            if isinstance(item, dict):
                if 'text' in item:
                    texts.append(item['text'])
                elif 'content' in item:
                    texts.append(extract_text_from_content(item['content']))
            elif isinstance(item, str):
                texts.append(item)
        return '\n'.join(texts)
    elif isinstance(content, dict):
        if 'text' in content:
            return content['text']
        elif 'content' in content:
            return extract_text_from_content(content['content'])
    return str(content)


def format_timestamp(timestamp_str: str) -> str:
    """Format an ISO timestamp into a readable form."""
    try:
        dt = datetime.fromisoformat(timestamp_str.replace('Z', '+00:00'))
        return dt.strftime('%Y-%m-%d %H:%M:%S')
    except ValueError:
        return timestamp_str


def extract_messages(jsonl_file: Path, include_meta: bool = False, verbose: bool = False) -> List[Dict[str, Any]]:
    """
    Extract messages from a JSONL file.

    Args:
        jsonl_file: Path to the JSONL file
        include_meta: Whether to include meta messages
        verbose: Whether to include additional metadata

    Returns:
        List of extracted messages
    """
    messages = []
    with open(jsonl_file, 'r', encoding='utf-8') as f:
        for line_num, line in enumerate(f, 1):
            if not line.strip():
                continue
            try:
                data = json.loads(line)
                # Skip meta messages unless requested
                if data.get('isMeta') and not include_meta:
                    continue
                # Extract message data
                if 'message' in data:
                    msg_data = data['message']
                    # Determine message type and role
                    msg_type = data.get('type', 'unknown')
                    role = msg_data.get('role', msg_type)
                    # Extract content
                    content = msg_data.get('content', '')
                    text = extract_text_from_content(content)
                    # Build message entry
                    message = {
                        'line': line_num,
                        'type': msg_type,
                        'role': role,
                        'text': text,
                        'timestamp': format_timestamp(data.get('timestamp', ''))
                    }
                    # Add verbose metadata if requested
                    if verbose:
                        message['uuid'] = data.get('uuid', '')
                        message['session_id'] = data.get('sessionId', '')
                        message['is_compact_summary'] = data.get('isCompactSummary', False)
                        message['is_meta'] = data.get('isMeta', False)
                    messages.append(message)
            except json.JSONDecodeError as e:
                print(f"Error parsing line {line_num}: {e}", file=sys.stderr)
            except Exception as e:
                print(f"Error processing line {line_num}: {e}", file=sys.stderr)
    return messages


def print_messages(messages: List[Dict[str, Any]], format_type: str = 'readable'):
    """
    Print extracted messages in various formats.

    Args:
        messages: List of extracted messages
        format_type: Output format ('readable', 'json', 'compact')
    """
    if format_type == 'json':
        print(json.dumps(messages, indent=2))
    elif format_type == 'compact':
        for msg in messages:
            role = msg['role'].upper()
            text = msg['text'][:100] + '...' if len(msg['text']) > 100 else msg['text']
            text = text.replace('\n', ' ')
            print(f"[{msg['timestamp']}] {role}: {text}")
    else:  # readable
        for i, msg in enumerate(messages, 1):
            print(f"\n{'='*80}")
            print(f"Message #{i} (Line {msg['line']})")
            print(f"Timestamp: {msg['timestamp']}")
            print(f"Type: {msg['type']} | Role: {msg['role']}")
            print(f"{'-'*80}")
            print(msg['text'])
        print(f"\n{'='*80}")
        print(f"Total messages: {len(messages)}")


def main():
    """Main entry point for the script."""
    parser = argparse.ArgumentParser(
        description='Extract messages from Claude Code JSONL conversation files.',
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Examples:
  # Extract messages in readable format
  python extract_messages.py conversation.jsonl

  # Include meta messages
  python extract_messages.py conversation.jsonl --include-meta

  # Output as JSON
  python extract_messages.py conversation.jsonl --format json

  # Compact one-line format
  python extract_messages.py conversation.jsonl --format compact

  # Verbose mode with additional metadata
  python extract_messages.py conversation.jsonl --verbose

  # Save output to file
  python extract_messages.py conversation.jsonl > messages.txt
"""
    )
    parser.add_argument(
        'jsonl_file',
        type=Path,
        help='Path to the JSONL file containing conversation data'
    )
    parser.add_argument(
        '--include-meta',
        action='store_true',
        help='Include meta messages in the output'
    )
    parser.add_argument(
        '--format',
        choices=['readable', 'json', 'compact'],
        default='readable',
        help='Output format (default: readable)'
    )
    parser.add_argument(
        '--verbose',
        action='store_true',
        help='Include additional metadata in output'
    )
    parser.add_argument(
        '--filter-role',
        help='Filter messages by role (e.g., user, assistant)'
    )
    parser.add_argument(
        '--filter-type',
        help='Filter messages by type (e.g., user, assistant)'
    )
    args = parser.parse_args()

    # Validate input file
    if not args.jsonl_file.exists():
        print(f"Error: File '{args.jsonl_file}' not found", file=sys.stderr)
        sys.exit(1)

    # Extract messages
    messages = extract_messages(
        args.jsonl_file,
        include_meta=args.include_meta,
        verbose=args.verbose
    )

    # Apply filters if specified
    if args.filter_role:
        messages = [m for m in messages if m['role'] == args.filter_role]
    if args.filter_type:
        messages = [m for m in messages if m['type'] == args.filter_type]

    # Print messages
    if messages:
        print_messages(messages, args.format)
    else:
        print("No messages found matching the criteria.", file=sys.stderr)


if __name__ == '__main__':
    main()
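For reference, here is a minimal sketch of the transcript-line shape the script above expects. The field names (`type`, `message`, `content`, `timestamp`) are taken from the parsing code; the sample values are hypothetical:

```python
import json

# Hypothetical transcript line, shaped the way extract_messages() reads it.
sample_line = json.dumps({
    "type": "assistant",
    "timestamp": "2025-01-15T10:30:00Z",
    "message": {
        "role": "assistant",
        "content": [{"type": "text", "text": "Hello from Claude"}],
    },
})

# Mirror the extraction logic: collect 'text' entries from the content blocks.
data = json.loads(sample_line)
content = data["message"]["content"]
texts = [item["text"] for item in content if isinstance(item, dict) and "text" in item]
print("\n".join(texts))  # Hello from Claude
```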
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.11"
# dependencies = [
#     "python-dotenv",
# ]
# ///
import argparse
import json
import os
import sys
from pathlib import Path
from datetime import datetime
from typing import Dict, List, Any

try:
    from dotenv import load_dotenv
    load_dotenv()
except ImportError:
    pass  # dotenv is optional


def extract_text_from_content(content: Any) -> str:
    """Extract text from various content structures."""
    if isinstance(content, str):
        return content
    elif isinstance(content, list):
        texts = []
        for item in content:
            if isinstance(item, dict):
                if 'text' in item:
                    texts.append(item['text'])
                elif 'content' in item:
                    texts.append(extract_text_from_content(item['content']))
            elif isinstance(item, str):
                texts.append(item)
        return '\n'.join(texts)
    elif isinstance(content, dict):
        if 'text' in content:
            return content['text']
        elif 'content' in content:
            return extract_text_from_content(content['content'])
    return str(content)


def format_timestamp(timestamp_str: str) -> str:
    """Format an ISO timestamp into a readable form."""
    try:
        dt = datetime.fromisoformat(timestamp_str.replace('Z', '+00:00'))
        return dt.strftime('%Y-%m-%d %H:%M:%S')
    except ValueError:
        return timestamp_str


def extract_messages(jsonl_file: Path, include_meta: bool = False) -> List[Dict[str, Any]]:
    """
    Extract messages from a JSONL file.

    Args:
        jsonl_file: Path to the JSONL file
        include_meta: Whether to include meta messages

    Returns:
        List of extracted messages
    """
    messages = []
    try:
        with open(jsonl_file, 'r', encoding='utf-8') as f:
            for line_num, line in enumerate(f, 1):
                if not line.strip():
                    continue
                try:
                    data = json.loads(line)
                    # Skip meta messages unless requested
                    if data.get('isMeta') and not include_meta:
                        continue
                    # Extract message data
                    if 'message' in data:
                        msg_data = data['message']
                        # Determine message type and role
                        msg_type = data.get('type', 'unknown')
                        role = msg_data.get('role', msg_type)
                        # Extract content
                        content = msg_data.get('content', '')
                        text = extract_text_from_content(content)
                        # Build message entry
                        message = {
                            'line': line_num,
                            'type': msg_type,
                            'role': role,
                            'text': text,
                            'timestamp': format_timestamp(data.get('timestamp', ''))
                        }
                        messages.append(message)
                except json.JSONDecodeError:
                    pass
                except Exception:
                    pass
    except Exception:
        pass
    return messages


def log_pre_compact(input_data):
    """Log pre-compact event to the logs directory."""
    # Ensure logs directory exists
    log_dir = Path("logs")
    log_dir.mkdir(parents=True, exist_ok=True)
    log_file = log_dir / 'pre_compact.json'
    # Read existing log data or initialize an empty list
    if log_file.exists():
        with open(log_file, 'r') as f:
            try:
                log_data = json.load(f)
            except (json.JSONDecodeError, ValueError):
                log_data = []
    else:
        log_data = []
    # Append the entire input data
    log_data.append(input_data)
    # Write back to file with formatting
    with open(log_file, 'w') as f:
        json.dump(log_data, f, indent=2)


def extract_and_backup_messages(transcript_path, trigger):
    """Extract messages from the transcript and save them as a backup before compaction."""
    try:
        if not os.path.exists(transcript_path):
            return None
        # Extract messages from the transcript
        messages = extract_messages(Path(transcript_path), include_meta=False)
        if not messages:
            return None
        # Create backup directory for extracted messages
        backup_dir = Path(".dev-resources") / "context" / "messages_backup"
        backup_dir.mkdir(parents=True, exist_ok=True)
        # Generate backup filename with timestamp and trigger type
        timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        session_name = Path(transcript_path).stem
        backup_name = f"{session_name}_messages_{trigger}_{timestamp}.json"
        backup_path = backup_dir / backup_name
        # Save extracted messages as JSON
        with open(backup_path, 'w', encoding='utf-8') as f:
            json.dump(messages, f, indent=2, ensure_ascii=False)
        return str(backup_path)
    except Exception:
        return None


def main():
    try:
        # Parse command line arguments (none are defined; this handles --help)
        parser = argparse.ArgumentParser()
        parser.parse_args()
        # Read JSON input from stdin
        input_data = json.loads(sys.stdin.read())
        # Extract fields
        session_id = input_data.get('session_id', 'unknown')
        transcript_path = input_data.get('transcript_path', '')
        trigger = input_data.get('trigger', 'unknown')  # "manual" or "auto"
        custom_instructions = input_data.get('custom_instructions', '')
        # Log the pre-compact event
        log_pre_compact(input_data)
        # Extract and backup messages
        backup_path = None
        if transcript_path:
            backup_path = extract_and_backup_messages(transcript_path, trigger)
        # Decision block: block auto compaction
        reason = "Compaction blocked by precompact hook. Use manual compaction if needed."
        if backup_path:
            reason += f" Messages backed up to: {backup_path}"
        result = {
            "decision": "block",
            "reason": reason
        }
        print(json.dumps(result))
        sys.exit(0)
    except json.JSONDecodeError:
        # Handle JSON decode errors gracefully
        sys.exit(2)
    except Exception:
        # Handle any other errors gracefully
        sys.exit(2)


if __name__ == '__main__':
    main()
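The hook above follows the PreCompact contract: a JSON payload on stdin, a JSON decision on stdout. A minimal sketch of that exchange, with a hypothetical payload (the field names mirror what main() reads; the values are illustrative):

```python
import json

# Hypothetical PreCompact payload; field names match what main() reads.
payload = {
    "session_id": "abc123",
    "transcript_path": "/tmp/transcript.jsonl",
    "trigger": "auto",  # "manual" or "auto"
    "custom_instructions": "",
}

# Build the decision object the hook emits; "block" stops the compaction.
reason = "Compaction blocked by precompact hook. Use manual compaction if needed."
decision = {"decision": "block", "reason": reason}
output = json.dumps(decision)
print(output)
```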
{
"$schema": "https://json.schemastore.org/claude-code-settings.json",
"permissions": {
"allow": [
"Bash",
"Read",
"Edit",
"Write",
"WebFetch",
"Grep",
"Glob",
"LS",
"MultiEdit",
"NotebookRead",
"NotebookEdit",
"TodoRead",
"TodoWrite",
"WebSearch",
"mcp__firecrawl__firecrawl_search",
"mcp__firecrawl__firecrawl_scrape",
"mcp__context7__resolve-library-id",
"mcp__context7__get-library-docs",
"mcp__fetch__fetch",
"mcp__puppeteer__puppeteer_screenshot",
"mcp__puppeteer__puppeteer_fill",
"mcp__puppeteer__puppeteer_evaluate",
"mcp__puppeteer__puppeteer_navigate",
"mcp__puppeteer__puppeteer_click",
"mcp__puppeteer__puppeteer_hover",
"mcp__ide__getDiagnostics",
"mcp__serena__activate_project",
"mcp__serena__check_onboarding_performed",
"mcp__serena__delete_memory",
"mcp__serena__find_file",
"mcp__serena__find_symbol",
"mcp__serena__get_symbols_overview",
"mcp__serena__insert_after_symbol",
"mcp__serena__insert_before_symbol",
"mcp__serena__list_dir",
"mcp__serena__list_memories",
"mcp__serena__onboarding",
"mcp__serena__read_memory",
"mcp__serena__replace_symbol_body",
"mcp__serena__search_for_pattern",
"mcp__serena__think_about_task_adherence",
"mcp__serena__think_about_whether_you_are_done",
"mcp__serena__write_memory",
"mcp__serena__find_referencing_symbols",
"mcp__serena__think_about_collected_information"
]
},
"model": "opus",
"hooks": {
"PreCompact": [
{
"matcher": "auto",
"hooks": [
{
"type": "command",
"command": "uv run ~/.claude/hooks/pre_compact.py"
}
]
}
]
},
"statusLine": {
"type": "command",
"command": "ccusage statusline"
},
"feedbackSurveyState": {
"lastShownTime": 1754113546065
}
}
{
"permissions": {
"allow": [
"Bash(pkill:*)",
"Bash(cloc:*)",
"Bash(timeout:*)",
"Bash(dos2unix:*)",
"Bash(file:*)",
"Bash(curl:*)",
"Bash(.venv:*)",
"Bash(source:*)",
"Bash(rg:*)",
"Bash(git:*)",
"Bash(npm:*)",
"Bash(npx:*)",
"Bash(yarn:*)",
"Bash(pnpm:*)",
"Bash(bun:*)",
"Bash(pip:*)",
"Bash(python:*)",
"Bash(python3:*)",
"Bash(node:*)",
"Bash(deno:*)",
"Bash(cargo:*)",
"Bash(rustc:*)",
"Bash(go:*)",
"Bash(mvn:*)",
"Bash(gradle:*)",
"Bash(make:*)",
"Bash(cmake:*)",
"Bash(jest:*)",
"Bash(vitest:*)",
"Bash(pytest:*)",
"Bash(mocha:*)",
"Bash(jasmine:*)",
"Bash(cypress:*)",
"Bash(playwright:*)",
"Bash(eslint:*)",
"Bash(prettier:*)",
"Bash(black:*)",
"Bash(flake8:*)",
"Bash(mypy:*)",
"Bash(tsc:*)",
"Bash(rustfmt:*)",
"Bash(gofmt:*)",
"Bash(ls:*)",
"Bash(cat:*)",
"Bash(head:*)",
"Bash(tail:*)",
"Bash(grep:*)",
"Bash(find:*)",
"Bash(which:*)",
"Bash(where:*)",
"Bash(pwd:*)",
"Bash(echo:*)",
"Bash(printf:*)",
"Bash(wc:*)",
"Bash(sort:*)",
"Bash(uniq:*)",
"Bash(awk:*)",
"Bash(sed:*)",
"Bash(cut:*)",
"Bash(tr:*)",
"Bash(mkdir:*)",
"Bash(touch:*)",
"Bash(cp:*)",
"Bash(mv:*)",
"Bash(chmod:*)",
"Bash(chown:*)",
"Bash(docker:*)",
"Bash(docker-compose:*)",
"Bash(kubectl:*)",
"Bash(helm:*)",
"Bash(nvm:*)",
"Bash(rbenv:*)",
"Bash(pyenv:*)",
"Bash(rustup:*)",
"Bash(gh:*)",
"Bash(hub:*)",
"Bash(glab:*)",
"Bash(heroku:*)",
"Bash(vercel:*)",
"Bash(netlify:*)",
"Bash(ps:*)",
"Bash(top:*)",
"Bash(htop:*)",
"Bash(df:*)",
"Bash(du:*)",
"Bash(free:*)",
"Bash(uname:*)",
"Bash(whoami:*)",
"Bash(id:*)",
"Bash(date:*)",
"Bash(uptime:*)",
"Bash(hg:*)",
"Bash(svn:*)",
"Bash(sqlite3:*)",
"Bash(psql:*)",
"Bash(mysql:*)",
"Bash(redis-cli:*)",
"Bash(mongo:*)",
"Read",
"Edit",
"Create",
"Write",
"Delete",
"Grep",
"Glob",
"LS",
"WebFetch(domain:github.com)",
"WebFetch(domain:docs.*)",
"WebFetch(domain:api.*)",
"WebFetch(domain:raw.githubusercontent.com)",
"WebFetch(domain:stackoverflow.com)",
"WebFetch(domain:developer.mozilla.org)",
"mcp__puppeteer__puppeteer_*",
"Bash(streamlit run:*)",
"mcp__puppeteer__puppeteer_navigate",
"mcp__puppeteer__puppeteer_screenshot",
"mcp__puppeteer__puppeteer_click",
"mcp__puppeteer__puppeteer_evaluate",
"mcp__context7__resolve-library-id",
"mcp__firecrawl__firecrawl_search",
"mcp__firecrawl__*",
"mcp__context7__get-library-docs",
"mcp__context7__*",
"mcp__firecrawl__firecrawl_scrape",
"mcp__serena__read_file",
"mcp__serena__create_text_file",
"mcp__serena__list_dir",
"mcp__serena__find_file",
"mcp__serena__replace_regex",
"mcp__serena__delete_lines",
"mcp__serena__replace_lines",
"mcp__serena__insert_at_line",
"mcp__serena__search_for_pattern",
"mcp__serena__restart_language_server",
"mcp__serena__get_symbols_overview",
"mcp__serena__find_symbol",
"mcp__serena__find_referencing_symbols",
"mcp__serena__replace_symbol_body",
"mcp__serena__insert_after_symbol",
"mcp__serena__insert_before_symbol",
"mcp__serena__write_memory",
"mcp__serena__read_memory",
"mcp__serena__list_memories",
"mcp__serena__delete_memory",
"mcp__serena__execute_shell_command",
"mcp__serena__activate_project",
"mcp__serena__remove_project",
"mcp__serena__switch_modes",
"mcp__serena__get_current_config",
"mcp__serena__check_onboarding_performed",
"mcp__serena__onboarding",
"mcp__serena__think_about_collected_information",
"mcp__serena__think_about_task_adherence",
"mcp__serena__think_about_whether_you_are_done",
"mcp__serena__summarize_changes",
"mcp__serena__prepare_for_new_conversation",
"mcp__serena__initial_instructions",
"mcp__serena__jet_brains_find_symbol",
"mcp__serena__jet_brains_find_referencing_symbols",
"mcp__serena__jet_brains_get_symbols_overview"
],
"deny": []
},
"model": "sonnet"
}
{
"hooks": {
"Stop": [
{
"matcher": "",
"hooks": [
{
"type": "command",
"command": "bash ~/.claude/scripts/context-aware-e2e-test.sh",
"timeout": 300
}
]
}
]
}
}
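A settings fragment like the one above can be sanity-checked before use. This sketch parses the Stop-hook config and walks the nested structure (event, matcher groups, command hooks) to confirm each entry is well-formed:

```python
import json

# The Stop-hook settings fragment from above, embedded as a string for checking.
settings = json.loads("""
{
  "hooks": {
    "Stop": [
      {
        "matcher": "",
        "hooks": [
          {
            "type": "command",
            "command": "bash ~/.claude/scripts/context-aware-e2e-test.sh",
            "timeout": 300
          }
        ]
      }
    ]
  }
}
""")

# Walk every registered hook and confirm each command entry is well-formed.
for event, matcher_groups in settings["hooks"].items():
    for group in matcher_groups:
        for hook in group["hooks"]:
            assert hook["type"] == "command" and hook["command"]
print("hooks config OK")
```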
#!/bin/bash
# Save as ~/.claude/scripts/context-aware-e2e-test.sh

echo "🎯 Generating context-aware end-to-end test from session data..."

# Get hook input data
HOOK_DATA=$(cat)
TRANSCRIPT_PATH=$(echo "$HOOK_DATA" | jq -r '.transcript_path // empty')
SESSION_ID=$(echo "$HOOK_DATA" | jq -r '.session_id // "unknown"')

CONTEXT_PROMPT="I need to create a comprehensive end-to-end test for a feature that was just developed. Here's the context I have available:

## SESSION CONTEXT:"

# Add transcript analysis if available
if [ -n "$TRANSCRIPT_PATH" ] && [ -f "$TRANSCRIPT_PATH" ]; then
    echo "📜 Including session transcript for context..."
    CONTEXT_PROMPT="$CONTEXT_PROMPT

### Conversation Transcript:
Please analyze this transcript file to understand what was developed: $TRANSCRIPT_PATH"
fi

# Add git changes if available
if git rev-parse --git-dir > /dev/null 2>&1; then
    echo "📝 Including git changes for context..."
    RECENT_CHANGES=$(git diff HEAD~1 2>/dev/null || git diff --cached 2>/dev/null || echo "No git changes")
    CONTEXT_PROMPT="$CONTEXT_PROMPT

### Code Changes:
$RECENT_CHANGES"
fi

# Add file changes log if available
CHANGES_LOG="/tmp/claude_session_${SESSION_ID}_changes.json"
if [ -f "$CHANGES_LOG" ]; then
    echo "🗂️ Including tracked file changes..."
    CHANGES_DATA=$(cat "$CHANGES_LOG")
    CONTEXT_PROMPT="$CONTEXT_PROMPT

### File Changes During Session:
$CHANGES_DATA"
fi

CONTEXT_PROMPT="$CONTEXT_PROMPT

## TASK:
Based on all the context above, create a comprehensive END-TO-END test client that:

1. **STARTS THE ACTUAL APPLICATION** - Boot up the real app/server/service
2. **TESTS COMPLETE USER JOURNEYS** - Not isolated functions, but full workflows
3. **VALIDATES THE NEW FEATURE** - Test what was actually built in this session
4. **SIMULATES REAL USAGE** - Test like a real user would interact with the feature
5. **INCLUDES ERROR SCENARIOS** - Test edge cases and failure modes
6. **PROVIDES CLEAR RESULTS** - Pass/fail with detailed logging

Create the appropriate test client (Python for APIs, bash for CLI, etc.) based on what type of application this is. Save as comprehensive_e2e_test.py or comprehensive_e2e_test.sh and execute it immediately.

The test should validate that the feature works correctly in the complete application context, not just as isolated code."

echo "🤖 Generating comprehensive E2E test with full context..."
claude -p "$CONTEXT_PROMPT"

# Execute the generated test
if [ -f "comprehensive_e2e_test.py" ]; then
    echo "🧪 Executing Python E2E test..."
    python comprehensive_e2e_test.py
    TEST_RESULT=$?
elif [ -f "comprehensive_e2e_test.sh" ]; then
    echo "🧪 Executing Bash E2E test..."
    chmod +x comprehensive_e2e_test.sh
    ./comprehensive_e2e_test.sh
    TEST_RESULT=$?
else
    echo "⚠️ No test client was generated"
    exit 1
fi

# Cleanup
rm -f "$CHANGES_LOG" 2>/dev/null

if [ $TEST_RESULT -eq 0 ]; then
    echo "✅ CONTEXT-AWARE E2E TEST PASSED"
    echo "🎉 Feature validated successfully with full session context!"
else
    echo "❌ CONTEXT-AWARE E2E TEST FAILED"
    echo "🔥 Issues found when testing feature with session context!"
    exit 2
fi
---
name: tech-intelligence-researcher
description: Use this agent when you need comprehensive research on any technology, framework, library, or technical topic. This agent proactively searches multiple sources including Stack Overflow, GitHub, official documentation, and technical forums to gather the latest information and trends. Examples: <example>Context: User wants to research the latest developments in a specific technology. user: "I need to research the latest updates and best practices for React Server Components" assistant: "I'll use the tech-intelligence-researcher agent to conduct comprehensive research on React Server Components, gathering information from multiple sources and creating a detailed report." <commentary>The user is requesting technology research, so use the tech-intelligence-researcher agent to search multiple sources and compile findings.</commentary></example> <example>Context: User is exploring a new technology stack for a project. user: "Can you research the current state of WebAssembly performance optimizations and tooling?" assistant: "Let me launch the tech-intelligence-researcher agent to investigate WebAssembly performance optimizations and tooling across various sources." <commentary>This requires comprehensive technology research across multiple platforms, perfect for the tech-intelligence-researcher agent.</commentary></example>
---

You are a Technology Intelligence Researcher, an expert in conducting comprehensive technical research across multiple authoritative sources. Your mission is to gather, analyze, and synthesize the latest information about technologies, frameworks, libraries, and technical topics.

Your Research Methodology:

  1. Multi-Source Intelligence Gathering: Use available MCP tools (fetch, context7, etc.) to systematically search:

    • Official documentation and release notes
    • Stack Overflow discussions and solutions
    • GitHub repositories, issues, and discussions
    • Stack Exchange technical communities
    • Technical blogs and authoritative sources
    • Community forums and developer discussions
  2. Research Strategy: For each technology topic:

    • Start with official sources for authoritative information
    • Search Stack Overflow for common issues and solutions
    • Examine GitHub for latest developments, issues, and community feedback
    • Check Stack Exchange for in-depth technical discussions
    • Look for recent blog posts and articles from recognized experts
    • Identify trending topics and emerging patterns
  3. Information Analysis: As you gather data:

    • Prioritize recent information (last 6-12 months)
    • Cross-reference information across multiple sources
    • Identify consensus opinions vs. controversial topics
    • Note version-specific information and compatibility issues
    • Track performance benchmarks and real-world usage patterns
    • Document breaking changes and migration paths
  4. Quality Assurance: Ensure research quality by:

    • Verifying information against official sources
    • Noting the recency and reliability of sources
    • Distinguishing between stable features and experimental ones
    • Identifying potential biases or limitations in sources
  5. Comprehensive Documentation: Create a detailed markdown report in .dev-resources/research/ folder with:

    • Executive summary of key findings
    • Current state and recent developments
    • Best practices and recommended approaches
    • Common issues and solutions
    • Performance considerations and benchmarks
    • Community sentiment and adoption trends
    • Future roadmap and upcoming features
    • Properly cited sources with links
    • Structured sections for easy navigation

Research Report Structure:

  • Title and research date
  • Executive Summary
  • Current State Analysis
  • Recent Developments and Updates
  • Best Practices and Recommendations
  • Common Issues and Solutions
  • Performance and Benchmarks
  • Community Insights
  • Future Outlook
  • Detailed Source References

Research Standards:

  • Always verify information across multiple sources
  • Prioritize official documentation and authoritative sources
  • Include publication dates and version information
  • Note any conflicting information and provide context
  • Use clear, technical language appropriate for developers
  • Organize information logically with proper markdown formatting
  • Include code examples when relevant and properly attributed

You are thorough, analytical, and committed to providing accurate, up-to-date technical intelligence that enables informed decision-making.

1. Go to ChatGPT and discuss the requirement.
2. Once discussed, ask it to create the PRD in markdown.
3. Copy the PRD into the project directory.
4. Ask Claude Code to /test-driven-development
read @prd.md
and create a 5-stage development plan with acceptance criteria described in a very detailed manner at the end of every stage
and persist the plan in a markdown file in the resources folder
and DO NOT START WRITING CODE yet
5. In a new chat, ask Claude to start stage 1 development using @development-plan.md.
6. Once the changes are done, ask for /review.
7. In a new chat, ask Claude to start stage 2 development using @development-plan.md.
---
This workflow needs improvement by adding a mandatory PRD.
/frq - for new feature request inside an existing project
<requirement>
/compact
/develop @development-plan.md
/test-driven-development
/review
/persist-session
/compact or /clear
---
This workflow needs improvement by adding a mandatory PRD. For the time being, simply use Claude Code's plan mode.
/ask - for small changes, bug fix, inside an existing project
<requirement>
/user:plan 3
Here are the answers to your questions above:
<answers>
<approve the plan>
/persist-session
/clear
/develop @development-plan.md @resources/prd/prd.md
/compact or /clear
---
===
This workflow is working as expected.
===
===
Create a prompt.md in the resources/prompts folder after a detailed discussion with Gemini 2.5 Pro or o3 inside Windsurf.
/discussion resources/prompts/n8n-workflow.txt
===
===
/Plan 5 @resources/prd/prd.md
<approve the plan>
===
===
/persist-session
/clear
===
===
/init
/clear
===
===
/develop @resources/development_plan/development-plan.md @resources/prd/prd.md
===
===
/clear
===
===
/test @resources/prd/prd.md
===
------------------------
/init You should include the end goal of the application. Mention that it is not required to reach the end state all at once; rather, we will build the whole application one task at a time. We maintain resources/context/session-scratchpad.md to track the latest completed task, so that we can get an understanding of the state of the project even in a fresh session.
This project maintains a session scratchpad at resources/context/session-scratchpad.md to track the progress made so far and how the project has evolved over time.
Read the instructions at /root/.claude/commands/persist-session.md to get an understanding on how to update the session scratchpad.
Always read and update the session scratchpad when working on this project to maintain context continuity across different Claude Code sessions.
End goal architecture state of the project is:
<Put the end goal of the application here>
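The scratchpad convention above could be automated. A hypothetical helper (the function name and entry format are illustrative; only the scratchpad path follows the convention described here), demonstrated against a temp directory:

```python
import tempfile
from datetime import datetime
from pathlib import Path

def log_session(summary: str, scratchpad: Path) -> None:
    """Append a timestamped progress entry to the scratchpad file."""
    scratchpad.parent.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y-%m-%d %H:%M")
    with open(scratchpad, "a", encoding="utf-8") as f:
        f.write(f"\n## Session {stamp}\n{summary}\n")

# Demo against a temp dir; in the workflow the path would be
# resources/context/session-scratchpad.md.
pad = Path(tempfile.mkdtemp()) / "session-scratchpad.md"
log_session("Completed stage 1: project scaffolding.", pad)
print(pad.read_text(encoding="utf-8"))
```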
-----------------------------
/brainstorm-dev [requirement]
Create a flow sequence diagram using mermaid
/create-arch-md
/clear
/plan [path of the architecture document]
/clear
/rearticulate
A high-level view of what we are trying to build; this is just to understand the big picture:
@.dev-resources/plan/multi-mcp-support/implementation-plan.md
We will develop it phase by phase, and here is the scope of the current session.
We have to develop the first phase. For development, focus on this one only:
@.dev-resources/plan/multi-mcp-support/phases/phase-01-configuration-system.md
/test-driven-development
Start implementation
/snapshot
/clear
/rearticulate
This is the task you were working on:
@.dev-resources/context/plan/multi-mcp/phases/phase-02-mcp-orchestrator.md
And this is the handoff context from previous session:
.dev-resources/context/session-snapshot-phase2-mcp-orchestrator.md
/test-driven-development
Yes continue
/persist-session
/clear
-----------------------------
/ui-style-guide [requirement]
-----------------------------
/debug-python [Issue]
-----------------------------