@kevnk
Last active January 26, 2026 21:59
Convert CLAUDE.md/AGENTS.md, agents, and commands (and even skills) into modular skills
---
description: Extract modules into skills AND amplify terse instructions via research into comprehensive guidance
---

Skillify

Extract reusable modules from instruction files (CLAUDE.md, AGENTS.md) or individual files (commands, agents, markdown) into modular skills. Additionally, identify terse instructions (e.g., "use Chrome DevTools MCP") and amplify them into comprehensive skills via web research. The original file is preserved but trimmed down with skill references, reducing context while expanding capabilities.

Arguments

TARGET: $ARGUMENTS

Accepts one of:

  • No arguments: Process CLAUDE.md and AGENTS.md in the current project root
  • File path(s): Convert specific files (commands, agents, or any markdown)
    • Example: .claude/commands/research-codebase.md
    • Example: .claude/agents/code-reviewer.md
    • Multiple files can be specified, space-separated

Core Principles

Keep originals, extract modules, reference skills.

The goal is NOT to delete files or create 1:1 skill replacements. Instead:

  1. Identify self-contained modules within a file that add context bloat
  2. Extract those modules into separate project-local skill files
  3. Replace the detailed content with skill references
  4. Result: Original file is leaner, detailed guidance is preserved in focused skills

Amplify terse instructions into comprehensive skills.

When a file contains brief references to external tools, MCP servers, APIs, or concepts:

  1. Identify these "amplification candidates" - instructions that could benefit from more detail
  2. Research the topic using general-purpose agent with WebSearch/WebFetch
  3. Create comprehensive skills that go BEYOND the original instruction
  4. Result: Brief mentions become powerful, detailed guidance

Edge Case: Skillifying a Skill (SKILL.md)

When the target is itself a SKILL.md file, the workflow changes. Instead of extracting into separate skills, decompose the bloated SKILL.md into supporting files within the same directory.

See skill: skill-decomposition-guide for the complete decomposition workflow, including:

  • Progressive Disclosure pattern and target structure
  • 6-step decomposition process with plan templates
  • What stays in SKILL.md vs. moves to references/examples
  • Common decomposition patterns and validation checklist

Key principle: Skills should be lean (~1,500-2,000 words) with detailed content in references/, examples/, and scripts/ subdirectories.


Standard Workflow (Commands, Agents, CLAUDE.md)

For non-skill files, follow the standard extraction and amplification workflow below.

Step 1: Read and Analyze

  1. Read the target file(s) fully using Read tool (no limit/offset)

  2. Identify extractable modules - look for:

    • Self-contained reference material (templates, patterns, schemas)
    • Detailed examples or documentation blocks
    • Verbose sections that clutter the main file
    • Content that would benefit from being in a dedicated, focused file

    Note: For CLAUDE.md/AGENTS.md, skills are project-specific (stored in ./.claude/skills/). Don't worry about cross-project reusability—focus on reducing context size and improving organization within this project.

  3. Identify amplification candidates - terse instructions that reference:

    • MCP servers (e.g., "use Chrome DevTools MCP", "use playwright")
    • External tools or CLIs (e.g., "use gh CLI", "run shellcheck")
    • APIs or services (e.g., "call the Slack API", "use OpenAI")
    • Frameworks or libraries (e.g., "use Tailwind", "follow 12-factor")
    • Concepts that deserve deeper guidance (e.g., "handle errors properly")
  4. Identify what should stay inline:

    • Core workflow steps specific to this command
    • Decision logic unique to this use case
    • Brief summaries and orchestration

See skill: skillify-decision-guide for detailed extraction/amplification criteria.

Step 2: Plan Extraction & Amplification

For each extractable module:

EXTRACTION: "[Section/Block Name]"
  Skill Name: [kebab-case-name]
  Lines in Original: [approximate]
  Context Reduction: [how much leaner does original become?]
  Description: [trigger-focused description for skill frontmatter]

For each amplification candidate:

AMPLIFICATION: "[Terse Instruction]"
  Skill Name: [kebab-case-name]
  Current Detail: [1-2 lines / brief mention / etc.]
  Research Topics: [what to look up - docs, best practices, examples]
  Value Add: [what comprehensive skill would provide]
  Description: [trigger-focused description for skill frontmatter]

Present the plan to the user before proceeding.

Step 3: Research Amplification Candidates

For each amplification candidate, spawn a general-purpose agent to research:

See skill: task-agent-patterns for agent spawning patterns and prompt best practices.

Task tool:
  subagent_type: general-purpose
  prompt: "Research [topic] for creating a comprehensive skill. Use WebSearch
    to find documentation and guides, then WebFetch to read key pages. Include:
    - Official documentation and getting started guides
    - Key concepts and terminology
    - Common use cases and examples
    - Best practices and gotchas
    - Configuration options and parameters
    - Error handling patterns
    - Integration patterns with Claude Code / AI assistants

    Return a comprehensive summary suitable for creating a skill file."

Research depth guidance:

  • MCP servers: Focus on available tools, when to use each, common patterns
  • CLI tools: Focus on flags, common workflows, output parsing
  • APIs: Focus on authentication, key endpoints, rate limits, error handling
  • Frameworks: Focus on core concepts, project structure, conventions

Compile research findings before creating skills.

Step 4: Create Skills

For each module (extraction) and research result (amplification), create a skill.

See skill: skill-structure-guide for complete SKILL.md format and best practices.

Directory structure (project-local):

./.claude/skills/[skill-name]/
└── SKILL.md

Skills are created in the current project's .claude/skills/ directory, making them project-specific. This is intentional—focus on reducing context size rather than cross-project reusability.

Guidelines for extracted skills:

  • Keep under 500 lines
  • Use imperative form
  • Include examples where helpful
  • Can assume project context (these are project-specific skills)

Guidelines for amplified skills:

  • Go BEYOND the original instruction - add real value
  • Include practical examples specific to Claude Code context
  • Document tool parameters, options, and common patterns
  • Add troubleshooting guidance and common pitfalls
  • Include "when to use" and "when NOT to use" guidance
  • Keep actionable - not just reference docs, but how to actually use it

Step 5: Update Original File

Replace extracted sections with skill references:

Before (verbose):

## Document Template

### File Naming Convention
Location: `thoughts/shared/research/`
Format: `YYYY-MM-DD-description.md`
...
[45 more lines of template details]

After (lean):

## Document Template

**See skill: `research-document-template`** for structure and metadata.

Write to `thoughts/shared/research/YYYY-MM-DD-description.md`

Guidelines:

  • Keep 1-2 sentence summary of what the skill provides
  • Include the skill reference clearly
  • Keep any context-specific details inline
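The replacement step can be sketched as a small text transformation. This is purely illustrative; `replace_section` and its arguments are hypothetical names, not part of any Claude Code API:

```python
import re

# Hypothetical sketch of Step 5: swap the body under a "## Section" heading
# for a lean skill reference. Illustrative only, not a Claude Code API.
def replace_section(doc: str, heading: str, skill: str, summary: str) -> str:
    """Replace the body under `## heading` with a pointer to `skill`."""
    pattern = re.compile(
        rf"(^## {re.escape(heading)}\n)(.*?)(?=^## |\Z)", re.M | re.S
    )
    pointer = f"\n**See skill: `{skill}`** for {summary}\n\n"
    return pattern.sub(lambda m: m.group(1) + pointer, doc)
```

The lookahead stops at the next `## ` heading, so surrounding sections are left untouched.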

Step 6: Summary Report

See skill: skillify-examples for report format and additional examples.

Generate a summary showing:

  • Extraction results (before/after line counts, reduction %)
  • Skills created (extracted and amplified)
  • Skill references added
  • Next steps for verification
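The reduction figure in the report is simple arithmetic; a minimal helper (names are illustrative) might look like:

```python
# Small helper for the summary report's reduction figure (illustrative names).
def reduction_report(before: int, after: int) -> str:
    """Format before/after line counts with the percent reduction."""
    pct = round((before - after) / before * 100)
    return f"{before} -> {after} lines ({pct}% reduction)"

print(reduction_report(182, 87))
```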

Skill Naming

  • Use kebab-case: explore-agent-patterns
  • Be descriptive: research-document-template not template
  • Indicate scope: bash-agent-patterns not bash
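The kebab-case rule can be expressed as a check. The regex and the 64-character limit below restate the rules in this document; this is a sketch, not Claude Code's actual validation:

```python
import re

# Sketch of the naming rules: lowercase letters, digits, hyphen-separated
# words, max 64 chars. An assumption, not Claude Code's real validator.
KEBAB = re.compile(r"[a-z0-9]+(-[a-z0-9]+)*")

def valid_skill_name(name: str) -> bool:
    return len(name) <= 64 and KEBAB.fullmatch(name) is not None
```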
---
name: skill-decomposition-guide
description: Guide for decomposing large SKILL.md files into lean core + supporting files. Use this skill when refactoring bloated skills, splitting skill content into references/examples directories, or applying Progressive Disclosure pattern. Triggers: "decompose skill", "split SKILL.md", "skill too large", "progressive disclosure", "lean skill", "skill references directory", "skillify a skill", "break up skill file".
---

Skill Decomposition Guide

Decompose bloated SKILL.md files into lean core instructions + supporting files, following the Progressive Disclosure pattern.

When to Use

  • SKILL.md exceeds ~3,000 words or ~500 lines
  • Skill loads unnecessary detail into context
  • Want to enable on-demand loading of reference material
  • Refactoring existing skills for better organization

Progressive Disclosure Principle

Skills use a three-level loading system:

  1. Metadata (always in context): name + description (~100 words)
  2. SKILL.md body (when skill triggers): Core workflow (<2,000 words)
  3. Supporting files (loaded as needed): Unlimited depth

A bloated SKILL.md wastes context. Breaking it into supporting files lets the AI agent load detail on-demand.

Target Directory Structure

skill-name/
├── SKILL.md              # Lean: core workflow, pointers (~1,500-2,000 words)
├── references/           # Detailed docs, patterns, advanced techniques
│   ├── patterns.md
│   ├── advanced.md
│   └── api-reference.md
├── examples/             # Working code, templates, configurations
│   ├── basic-example.md
│   └── advanced-example.md
└── scripts/              # Executable utilities (if applicable)
    └── validate.sh

Decomposition Workflow

Step 1: Analyze the SKILL.md

Read the skill and categorize content:

| Category | Destination | Content Type |
|----------|-------------|--------------|
| Core | SKILL.md | Purpose, when to use, essential workflow, resource pointers |
| Reference | references/ | Detailed patterns, advanced techniques, API docs, troubleshooting |
| Examples | examples/ | Code samples, templates, working configurations |
| Scripts | scripts/ | Validation tools, utilities |

Step 2: Plan the Decomposition

SKILL DECOMPOSITION: "[skill-name]"
  Current Lines: [X]
  Target SKILL.md: ~[Y] lines (1,500-2,000 words)

  KEEP IN SKILL.md:
  - [Section name] (essential workflow)
  - [Section name] (core concepts)

  MOVE TO references/:
  - "[Section]" → references/patterns.md (~Z lines)
  - "[Section]" → references/advanced.md (~Z lines)

  MOVE TO examples/:
  - "[Code block]" → examples/basic-example.md
  - "[Template]" → examples/template.md

Present plan before proceeding.

Step 3: Create Supporting Files

mkdir -p skill-name/references skill-name/examples

For each extracted section:

  • Write standalone file with clear heading
  • Include enough context to be useful without SKILL.md
  • Keep logical groupings (all patterns together, all examples together)

Step 4: Update SKILL.md

Replace extracted sections with pointers.

Before (bloated):

## Advanced Patterns

### Pattern 1: Complex Workflow
[200 lines of detailed pattern explanation]

### Pattern 2: Error Handling
[150 lines of error handling patterns]

After (lean):

## Advanced Patterns

For detailed patterns, see `references/patterns.md`:
- Complex Workflow pattern
- Error Handling pattern
- Integration patterns

Use these when implementing production features.

Step 5: Add Resources Section

Ensure SKILL.md references all supporting files:

## Additional Resources

### Reference Files
- **`references/patterns.md`** - Detailed implementation patterns
- **`references/advanced.md`** - Advanced techniques and edge cases

### Examples
- **`examples/basic-example.md`** - Getting started example
- **`examples/advanced-example.md`** - Production-ready example

Step 6: Summary Report

SKILL DECOMPOSITION COMPLETE: [skill-name]

Before: [X] lines in SKILL.md
After:  [Y] lines in SKILL.md ([Z]% reduction)

Files Created:
- references/patterns.md ([N] lines)
- references/advanced.md ([N] lines)
- examples/basic-example.md ([N] lines)

Progressive Disclosure levels:
- Level 1: Metadata (always loaded)
- Level 2: SKILL.md body (on trigger)
- Level 3: references/examples (on demand)

What Stays in SKILL.md

  • Purpose and "when to use" guidance
  • Essential workflow steps (high-level)
  • Quick reference tables
  • Pointers to supporting files
  • Most common use cases

Target: 1,500-2,000 words

What Moves to references/

  • Detailed patterns and advanced techniques
  • Comprehensive API documentation
  • Migration guides
  • Edge cases and troubleshooting
  • Extensive explanations

Each file: 500-2,000+ words

What Moves to examples/

  • Complete, runnable code
  • Configuration files
  • Template files
  • Real-world usage examples

Users copy and adapt these directly

Common Decomposition Patterns

Pattern: Single Large Section

One section dominates the skill.

SKILL.md (800 lines)
└── "API Reference" section (600 lines)

→ Move to references/api-reference.md
→ Keep 5-line summary + pointer in SKILL.md

Pattern: Multiple Detailed Sections

Several sections each have significant detail.

SKILL.md (1200 lines)
├── "Patterns" (300 lines)
├── "Advanced" (250 lines)
└── "Troubleshooting" (200 lines)

→ references/patterns.md
→ references/advanced.md
→ references/troubleshooting.md
→ SKILL.md keeps summaries + pointers (~450 lines)

Pattern: Code-Heavy Skill

Many inline code examples.

SKILL.md (900 lines)
└── Inline examples throughout (400 lines of code)

→ examples/basic.md (simple examples)
→ examples/advanced.md (complex examples)
→ SKILL.md references examples, keeps 1-2 minimal inline

Validation Checklist

After decomposition, verify:

  • SKILL.md under 2,000 words
  • All referenced files exist
  • Each supporting file is standalone (useful without SKILL.md)
  • "Additional Resources" section lists all files
  • No duplicate content between SKILL.md and references
  • Logical grouping in each reference file
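Parts of this checklist can be automated. The sketch below checks the 2,000-word limit and that referenced files exist; the path pattern and function name are assumptions for illustration:

```python
import re
from pathlib import Path

# Rough automation of the checklist above. The 2,000-word limit comes from
# this guide; the path-matching regex is an assumption for illustration.
def validate_skill(skill_dir: str) -> list[str]:
    root = Path(skill_dir)
    text = (root / "SKILL.md").read_text()
    problems = []
    if len(text.split()) > 2000:
        problems.append("SKILL.md exceeds 2,000 words")
    # Every references/, examples/, or scripts/ path mentioned must exist.
    for rel in sorted(set(re.findall(r"(?:references|examples|scripts)/[\w./-]+", text))):
        rel = rel.rstrip(".")
        if not (root / rel).exists():
            problems.append(f"missing referenced file: {rel}")
    return problems
```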
---
name: skill-structure-guide
description: Complete reference for creating SKILL.md files with proper structure and frontmatter. Use this skill when creating new skills, when understanding skill file format, or when debugging why a skill isn't triggering. Triggers: "create skill", "SKILL.md format", "skill frontmatter", "skill structure", "skill not triggering", "skill directory", "skill template".
---

Skill Structure Guide

Complete reference for creating and structuring Claude Code skills.

Directory Structure

Skill Location

.claude/skills/[skill-name]/
├── SKILL.md              # Required - main instructions
├── scripts/              # Optional - executable code
├── references/           # Optional - documentation for context
└── assets/               # Optional - files for output

Priority Order

  1. Enterprise (managed settings)
  2. Personal (~/.claude/skills/)
  3. Project (.claude/skills/)
  4. Plugin (<plugin>/skills/)

SKILL.md Template

---
name: skill-name
description: |
  What this skill does and when to use it. Include specific trigger
  phrases users would naturally say. Be comprehensive - this is the
  ONLY thing Claude sees to decide whether to load the skill.
  Triggers: "phrase 1", "phrase 2", "phrase 3".
---

# Skill Title

[Main instructions Claude follows when skill is active]

## Quick Start
[Essential usage steps]

## Guidelines
- Guideline 1
- Guideline 2

## Examples
[Concrete examples showing expected behavior]

## Additional Resources
- For advanced features, see [reference.md](reference.md)

Frontmatter Reference

Required/Recommended

| Field | Required | Description |
|-------|----------|-------------|
| name | Recommended | Display name (uses directory name if omitted). Lowercase, hyphens, max 64 chars. |
| description | Critical | What the skill does + when to use it. Include trigger phrases. |

Optional Fields

| Field | Default | Description |
|-------|---------|-------------|
| argument-hint | none | Hint for autocomplete (e.g., [issue-number]) |
| disable-model-invocation | false | Prevent auto-loading; manual /skill only |
| user-invocable | true | Show in / menu |
| allowed-tools | all | Tools usable without permission prompts |
| model | inherited | Model to use (sonnet/opus/haiku) |
| context | main | Set to fork for subagent execution |
| agent | none | Subagent type when context: fork |
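For illustration, the frontmatter block can be read with a toy parser like the one below. Real SKILL.md frontmatter is YAML and should be parsed with a YAML library; this sketch only handles flat `key: value` pairs and crude `|` block scalars:

```python
# Toy frontmatter reader, for illustration only. Real frontmatter is YAML;
# this handles only flat key: value pairs and a crude `|` block scalar.
def read_frontmatter(skill_md: str) -> dict:
    lines = skill_md.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}
    fields, key = {}, None
    for line in lines[1:]:
        if line.strip() == "---":
            break
        if line[:1] in (" ", "\t") and key:
            # continuation of a `|` block scalar
            fields[key] = (fields[key] + " " + line.strip()).strip()
        elif ":" in line:
            key, _, value = line.partition(":")
            key = key.strip()
            fields[key] = value.strip().lstrip("|").strip()
    return fields
```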

Description Best Practices

The description is critical - it's the only information Claude sees to decide whether to load the skill.

Good Descriptions

# Good: Comprehensive with triggers
description: |
  Patterns for spawning and using agents via the Task tool. Use when
  spawning Explore, Plan, or general-purpose agents, when writing
  agent prompts, or when using parallel/background agents.
  Triggers: "spawn agent", "Task tool", "subagent patterns".

# Good: Action-oriented with context
description: Create production-grade frontend interfaces. Use when
  building web components, pages, or applications.

Bad Descriptions

# Bad: Too vague
description: Helps with agents

# Bad: No trigger context
description: A tool for working with subagents

Description Formula

[What it does] + [When to use it] + [Trigger phrases]

Invocation Control

| Settings | You Can Invoke | Claude Can Invoke |
|----------|----------------|-------------------|
| (default) | Yes (/skill) | Yes (auto) |
| disable-model-invocation: true | Yes | No |
| user-invocable: false | No | Yes |

String Substitutions

| Variable | Description |
|----------|-------------|
| $ARGUMENTS | All arguments passed to skill |
| $ARGUMENTS[N] | Specific argument by index |
| $0, $1, etc. | Shorthand for arguments |
| !`command` | Shell command output (preprocessed) |

Example with Arguments

---
name: fix-issue
description: Fix a GitHub issue by number
disable-model-invocation: true
---

Fix GitHub issue #$ARGUMENTS following our coding standards.

Usage: /fix-issue 123 → "Fix GitHub issue #123..."
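The substitution above behaves roughly like a template expansion. The real preprocessing happens inside Claude Code; this sketch covers only `$ARGUMENTS` and the `$0`, `$1`, ... shorthand:

```python
# Illustration of $ARGUMENTS substitution. The real preprocessing is done by
# Claude Code; this sketch covers only $ARGUMENTS and $0, $1, ... shorthand.
def expand(template: str, args: list[str]) -> str:
    out = template.replace("$ARGUMENTS", " ".join(args))
    for i, arg in enumerate(args):
        out = out.replace(f"${i}", arg)
    return out

print(expand("Fix GitHub issue #$ARGUMENTS following our coding standards.", ["123"]))
```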

Naming Conventions

Rules

  • Lowercase letters, numbers, hyphens only
  • Maximum 64 characters
  • Hyphens separate words

Good Names

  • task-agent-patterns
  • skill-structure-guide
  • frontend-design
  • pr-summary

Bad Names

  • TaskAgentPatterns (no camelCase)
  • skill_structure (no underscores)
  • guide (too generic)

Supporting Files

scripts/

  • Executable code (Python, Bash, etc.)
  • Token efficient, deterministic
  • Claude executes without loading into context

references/

  • Documentation loaded as needed
  • API specs, schemas, policies
  • For large files (>10k words), include grep patterns in SKILL.md

assets/

  • Files for output, not loaded into context
  • Templates, images, boilerplate

Skills vs Commands vs Agents

| Aspect | Skills | Commands | Agents |
|--------|--------|----------|--------|
| Location | .claude/skills/ | .claude/commands/ | .claude/agents/ |
| Invocation | Auto or /skill | Manual /cmd only | Via Task tool |
| Context | Main conversation | Main conversation | Isolated |
| Purpose | Knowledge/guidance | Execute steps | Complex parallel work |

Note: Commands and skills with the same name share the /name invocation.

Troubleshooting

Skill Not Triggering

  1. Check description includes natural trigger phrases
  2. Ask "What skills are available?" and verify the skill is listed
  3. Try rephrasing to match description keywords
  4. Invoke directly with /skill-name

Skill Triggers Too Often

  1. Make description more specific
  2. Add disable-model-invocation: true

Skill Not Visible

  • May exceed character budget (default 15,000)
  • Run /context to check for warnings
  • Increase SLASH_COMMAND_TOOL_CHAR_BUDGET

Content Guidelines

Do Include

  • Clear instructions Claude can follow
  • Concrete examples of expected behavior
  • Decision criteria and edge cases
  • References to supporting files

Don't Include

  • README.md or installation guides
  • Changelog or version history
  • Redundant explanations (Claude is smart)
  • Excessive documentation (keep <500 lines)

Quick Checklist

  • Directory: .claude/skills/[name]/SKILL.md
  • Name: kebab-case, descriptive, <64 chars
  • Description: What + When + Triggers
  • Content: <500 lines, actionable
  • Examples: Concrete, not abstract
  • References: Linked from SKILL.md body
---
name: skillify-decision-guide
description: Decision criteria for when to extract content into skills vs keep inline, and when to amplify terse instructions via research. Use this skill when running /skillify, when deciding what makes a good extraction candidate, or when evaluating amplification opportunities. Triggers: "should I extract", "extraction criteria", "amplification criteria", "what to keep inline", "skillify decision", "good extraction candidate".
---

Skillify Decision Guide

Criteria for extraction, amplification, and inline retention decisions.

Extraction Criteria

Extract When:

  • Content is self-contained and reusable - doesn't depend on surrounding context
  • Multiple commands/agents could benefit from the same guidance
  • Section is >30 lines of detailed guidance
  • Content is reference material (templates, patterns, tables, schemas)
  • Content represents established best practices worth standardizing

Keep Inline When:

  • Content is <20 lines - extraction overhead not worth it
  • Content is specific to this one file's purpose - not generalizable
  • Content is orchestration/workflow logic - the "how" of this particular command
  • Content involves context-dependent decisions - requires knowledge of the surrounding flow
  • Content is a brief summary that just introduces a concept

Extraction Sizing Guide

| Lines | Recommendation |
|-------|----------------|
| <20 | Keep inline |
| 20-30 | Consider if highly reusable |
| 30-50 | Good extraction candidate |
| 50-100 | Strong extraction candidate |
| >100 | Definitely extract, may need to split |
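The sizing guide reduces to a threshold lookup. In this sketch the boundary handling (inclusive upper bounds) is an assumption, since the table leaves 30, 50, and 100 exactly on the line:

```python
# The sizing guide as a function. Inclusive upper bounds are an assumption;
# the table leaves the exact boundaries (30, 50, 100) ambiguous.
def extraction_recommendation(lines: int) -> str:
    if lines < 20:
        return "keep inline"
    if lines <= 30:
        return "consider if highly reusable"
    if lines <= 50:
        return "good extraction candidate"
    if lines <= 100:
        return "strong extraction candidate"
    return "definitely extract, may need to split"
```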

Amplification Criteria

Amplify When:

  • Instruction references external tool/service without usage details
  • "Use X" assumes knowledge the reader may not have
  • MCP server mentioned without explaining available tools
  • API/CLI referenced without common patterns or examples
  • Concept mentioned that has established best practices worth documenting
  • Research would yield >50 lines of actionable guidance

Skip Amplification When:

  • The tool/concept is trivially simple (e.g., "use cat to read files")
  • Comprehensive skill already exists for this topic
  • The instruction is intentionally brief - user should look it up themselves
  • Research would yield minimal additional value (<30 lines)
  • The topic is too niche - only relevant to this one use case

Amplification Value Indicators

| Indicator | Amplify? | Reason |
|-----------|----------|--------|
| "Use X MCP" | Yes | MCP servers have many tools to document |
| "Run shellcheck" | Maybe | Has flags, but fairly straightforward |
| "Use Git" | No | Too broad, many existing resources |
| "Follow 12-factor" | Yes | Complex methodology worth summarizing |
| "Call the API" | Yes | Auth, endpoints, errors need documenting |

Amplification Value-Add Checklist

Good amplification adds value by:

  • Turning "use X" into "here's how to use X effectively"
  • Documenting tool options the original author may not have known
  • Adding troubleshooting guidance from real-world experience
  • Providing decision frameworks (when to use tool A vs B)
  • Including integration patterns specific to Claude Code workflows
  • Adding "when NOT to use" guidance to prevent misuse

Quick Decision Flowchart

Is content >30 lines?
├─ No → Keep inline (unless highly reusable)
└─ Yes → Is it self-contained?
         ├─ No → Keep inline (context-dependent)
         └─ Yes → Is it reusable elsewhere?
                  ├─ No → Keep inline (too specific)
                  └─ Yes → EXTRACT

Is instruction terse (<5 lines)?
├─ No → Not an amplification candidate
└─ Yes → Does it reference external tool/service?
         ├─ No → Not an amplification candidate
         └─ Yes → Would research add >50 lines of value?
                  ├─ No → Skip amplification
                  └─ Yes → AMPLIFY
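The two flowcharts above can be written as predicates. The thresholds come from this guide; the boolean inputs remain judgment calls a human or agent still has to make:

```python
# The two flowcharts as predicates. Thresholds come from this guide; the
# boolean inputs are judgment calls a human or agent still has to make.
def should_extract(lines: int, self_contained: bool, reusable: bool) -> bool:
    return lines > 30 and self_contained and reusable

def should_amplify(lines: int, references_external_tool: bool,
                   research_value_lines: int) -> bool:
    return lines < 5 and references_external_tool and research_value_lines > 50
```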

Examples by Content Type

| Content Type | Typical Decision | Reason |
|--------------|------------------|--------|
| Templates | Extract | Highly reusable, self-contained |
| Workflow steps | Keep inline | Specific to this command |
| Tool reference tables | Extract | Reference material, reusable |
| Decision logic | Keep inline | Context-dependent |
| Best practices (>30 lines) | Extract | Reusable guidance |
| MCP server mentions | Amplify | Need tool documentation |
| API references | Amplify | Need auth/endpoint details |
| Brief tips (<10 lines) | Keep inline | Too small for skill |

Naming Extracted/Amplified Skills

  • Use kebab-case: explore-agent-patterns
  • Be descriptive: research-document-template not template
  • Indicate scope: bash-agent-patterns not bash
  • Include domain context: skillify-decision-guide not decision-guide
---
name: skillify-examples
description: Examples of skillify extraction and amplification patterns. Use this skill when learning how to use /skillify, when understanding extraction vs amplification, or when needing reference examples for modularizing files into skills. Triggers: "skillify example", "extraction example", "amplification example", "how does skillify work", "skill extraction patterns".
---

Skillify Examples

Reference examples showing extraction and amplification patterns for the /skillify command.

Example 1: Command Modularization

Input: research-codebase.md (182 lines)

Identified Modules:

  1. Explore agent patterns (thoroughness levels, capabilities, prompts) - 50 lines
  2. Research document template (YAML structure, sections) - 45 lines

Workflow steps kept inline:

  • Initial setup prompt
  • Decomposition instructions
  • Spawn/synthesize/present steps

Result:

  • Created: explore-agent-patterns skill (74 lines)
  • Created: research-document-template skill (75 lines)
  • Updated: research-codebase.md (182 → 87 lines, 52% reduction)
  • Command now references both skills at relevant steps

Example 2: CLAUDE.md Sections

Input: CLAUDE.md with these sections:

  1. Git commit preferences (2 lines) - too small
  2. MCP Server Development (60 lines) - extractable
  3. Agent Development (120 lines) - extractable
  4. Bash Scripting (80 lines) - extractable

Result:

  • Keep inline: Git preferences (too small for skill)
  • Created: mcp-server-dev skill
  • Created: agent-development skill
  • Created: bash-scripting skill
  • CLAUDE.md now has brief summaries with skill references

Example 3: Amplification (Terse → Comprehensive)

Input: Agent file with instruction:

Use Chrome DevTools MCP to inspect the page and verify changes.

Identified as amplification candidate:

  • Terse instruction (1 line)
  • References MCP server without explaining available tools
  • No guidance on when/how to use specific tools

Research conducted:

  • Chrome DevTools MCP documentation
  • Available tools: take_snapshot, take_screenshot, click, fill, navigate, etc.
  • Best practices for UI verification workflows
  • Common patterns for form testing, visual regression, etc.

Result - Created chrome-devtools-mcp skill (150 lines):

  • Complete list of available tools with descriptions
  • Decision guide: when to use snapshot vs screenshot
  • Common workflows: page navigation, form filling, element inspection
  • Examples for verification patterns
  • Troubleshooting common issues

Original file updated:

Use Chrome DevTools MCP to inspect the page and verify changes.
**See skill: `chrome-devtools-mcp`** for available tools and verification patterns.

Value added: 1 line instruction → 150 lines of actionable guidance

Before/After Comparison

Before (verbose in original file):

## Document Template

### File Naming Convention
Location: `thoughts/shared/research/`
Format: `YYYY-MM-DD-description.md`
...
[45 more lines of template details]

After (lean with skill reference):

## Document Template

**See skill: `research-document-template`** for structure and metadata.

Write to `thoughts/shared/research/YYYY-MM-DD-description.md`

Summary Report Format

After skillify completes, generate this report:

## Skillify Complete

### Extraction Summary
| Original File | Before | After | Reduction |
|---------------|--------|-------|-----------|
| [filename] | [X] lines | [Y] lines | [Z]% |

### Skills Extracted (from existing content)
| Skill Name | Lines | Purpose |
|------------|-------|---------|
| [name] | [N] | [brief purpose] |

### Skills Amplified (researched & expanded)
| Skill Name | Lines | Original Instruction | Value Added |
|------------|-------|---------------------|-------------|
| [name] | [N] | "[terse original]" | [what was added] |

### Skill References Added
- Step N: References `skill-name` for [purpose]

### Next Steps
1. Test the command to verify skill references work
2. Skills are now reusable by other commands/agents
3. Review amplified skills for accuracy (based on web research)
---
name: task-agent-patterns
description: Patterns for spawning and using agents via the Task tool in Claude Code. Use this skill when spawning Explore, Plan, general-purpose, or custom agents, when needing to understand agent types and capabilities, when writing effective agent prompts, or when using background/parallel agents. Triggers: "spawn agent", "Task tool", "subagent", "general-purpose agent", "Explore agent", "parallel agents", "background agent", "resume agent", "agent patterns".
---

Task Agent Patterns

Comprehensive guide to spawning and using agents via the Task tool.

Overview

The Task tool launches specialized agents (subprocesses) to handle complex tasks autonomously. Each agent operates in an isolated context window, enabling parallel execution and specialized workflows.

Available Agent Types

| Agent Type | Model | Tools | Best For |
|------------|-------|-------|----------|
| Explore | Haiku | Read-only | Fast codebase search, file discovery |
| Plan | Inherited | Read-only | Planning mode research |
| general-purpose | Inherited | All tools | Complex multi-step tasks |
| Bash | Inherited | Bash | Terminal commands in isolation |
| Custom agents | Configurable | Configurable | Specialized workflows |

When to Use Each Agent

Use Explore When:

  • Searching codebase without making changes
  • Quick file lookups and pattern matching
  • Answering "where is X?" questions
  • Parallel research across multiple areas

Thoroughness levels:

  • quick - Targeted lookups, fast responses
  • medium - Balanced exploration
  • very thorough - Comprehensive multi-location analysis

Use general-purpose When:

  • Task requires both reading AND writing
  • Complex reasoning with multiple steps
  • Need full tool access (WebSearch, Edit, etc.)
  • Research that leads to modifications

Use Plan When:

  • In plan mode gathering context
  • Need read-only research for planning

Task Tool Parameters

| Parameter | Required | Description |
|-----------|----------|-------------|
| subagent_type | Yes | Agent type to use |
| prompt | Yes | Instructions for the agent |
| description | Yes | 3-5 word summary |
| run_in_background | No | Run asynchronously (default: false) |
| resume | No | Agent ID to continue previous work |
| model | No | Override model (sonnet/opus/haiku) |

Prompt Writing Best Practices

1. Single Clear Goal

prompt: "Find all files that handle user authentication and list their paths with a brief description of each"

2. Include Completion Criteria

prompt: "Research the payment module. Return:
  - File paths involved
  - Key functions and their purposes
  - External dependencies
  - Potential security concerns"

3. Specify Output Format

prompt: "Analyze error handling patterns. Return a markdown table with:
  | File | Pattern Used | Recommendation |"

4. Define Scope Boundaries

prompt: "Focus only on the src/api/ directory. Do not explore frontend code."

Parallel Agent Spawning

Capacity

  • Up to 7 agents can run simultaneously
  • Tasks queue if limit exceeded

Pattern: Parallel Research

# Send multiple Task calls in single message:

Task(subagent_type: "Explore", prompt: "Analyze frontend architecture in src/components/")
Task(subagent_type: "Explore", prompt: "Analyze backend services in src/api/")
Task(subagent_type: "Explore", prompt: "Analyze database layer in src/models/")

Best Practices for Parallel

  • Each agent needs focused, non-overlapping scope
  • Works best for read/research tasks
  • Avoid parallel writes to same files
  • Request synthesis of findings after completion

Background Agents

How to Use

Task(
  subagent_type: "general-purpose",
  prompt: "Comprehensive security audit of authentication system",
  run_in_background: true
)

Limitations

  • Inherit parent's permissions only
  • MCP tools NOT available
  • Cannot ask clarifying questions
  • Permission failures fail silently

Monitoring

  • Use /tasks to view running agents
  • Read output file to check progress
  • Resume in foreground if permissions needed

Resuming Agents

When to Resume

  • Continue previous work with additional instructions
  • Add context after initial exploration
  • Handle permission failures from background runs

Pattern

# Initial task returns agent_id: "agent_abc123"

# Resume with additional instructions:
Task(
  resume: "agent_abc123",
  prompt: "Now also check for SQL injection vulnerabilities in the queries you found"
)

Session Persistence

  • Transcripts stored in ~/.claude/projects/{project}/{sessionId}/subagents/
  • Can resume after restarting Claude Code
  • Auto-cleanup after 30 days (configurable)

Common Patterns

Pattern 1: Research Then Implement

# Step 1: Research with Explore
Task(subagent_type: "Explore", prompt: "Find all places where user roles are checked")

# Step 2: Implement with general-purpose
Task(subagent_type: "general-purpose", prompt: "Add admin role check to the endpoints found above")

Pattern 2: Parallel File Analysis

# Spawn in parallel for independent analysis
Task(subagent_type: "Explore", prompt: "Analyze authentication module")
Task(subagent_type: "Explore", prompt: "Analyze authorization module")
Task(subagent_type: "Explore", prompt: "Analyze session management")

Pattern 3: Background Long-Running Task

Task(
  subagent_type: "general-purpose",
  prompt: "Run full test suite and report failures",
  run_in_background: true
)
# Continue working while tests run

Pattern 4: Chained Agents

# Use one agent's output as input to next
Task(subagent_type: "Explore", prompt: "Find performance bottlenecks")
# After completion:
Task(subagent_type: "general-purpose", prompt: "Optimize the bottlenecks found: [list from previous]")

When NOT to Use Agents

Use Main Conversation Instead When:

  • Task needs frequent back-and-forth
  • Quick, targeted changes (1-2 files)
  • Multiple phases share significant context
  • Latency matters (agents start fresh)

Troubleshooting

| Issue | Solution |
|-------|----------|
| Agent can't find files | Use absolute paths or clear directory references |
| Background agent fails silently | Resume in foreground to see errors |
| Agent gives incomplete results | Make prompt more specific with explicit deliverables |
| Agent takes too long | Use Explore with quick thoroughness |
| Parallel agents overlap | Define non-overlapping scopes clearly |