@lawrencejones
Last active January 30, 2026 14:04

Claude meta-analysis

Get Claude to tell you about how you use it. Just provide this prompt and check your usage report:

I want to analyze my ~/.claude directory to understand how I use Claude Code.

First, explore the directory structure to understand what data is available:
- ~/.claude/history.jsonl (session index)
- ~/.claude/projects/ (full conversation logs by project)
- ~/.claude/todos/ (task tracking)
- ~/.claude/stats-cache.json (usage statistics)
- ~/.claude/settings.json (configuration)
- ~/.claude/commands/ and ~/.claude/skills/ (customizations)

Then launch parallel sub-agents to analyze different aspects:

1. **Session patterns agent**: Parse history.jsonl to extract session counts by
   day/hour, identify most-used projects, calculate session frequency over time

2. **Conversation analysis agent**: Sample JSONL files from projects/ to analyze:
   - Common tool sequences/workflows
   - Average conversation length
   - Types of tasks (look for keywords: fix, add, refactor, test, etc.)

3. **Configuration agent**: Review settings.json, commands/, skills/, and any
   CLAUDE.md files to document customizations and preferences

4. **Example prompts agent**: Extract interesting user prompts from conversations
   that showcase usage patterns - categorize by type (debugging, architecture,
   orchestration, etc.)

Compile findings into a markdown report with:
- Overview metrics (sessions, messages, date range, primary projects)
- Time patterns (when do I work, peak hours, day of week distribution)
- Tool usage breakdown with percentages
- Common workflows (tool sequences)
- Task types based on keyword analysis
- Example prompts organized by category
- Key insights (3-5 bullet summary of distinctive patterns)

Save the report to ~/.claude/usage-report.md
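The "session patterns agent" step can be sketched in a few lines of Python. This is a minimal sketch, assuming each history.jsonl line is a JSON object carrying a millisecond `timestamp` field; the real schema varies between Claude Code versions, so treat the field names as placeholders to adapt after inspecting your own file:

```python
import json
from collections import Counter
from datetime import datetime, timezone

def tally_sessions(lines):
    """Count sessions by hour of day and day of week.

    Assumes each JSONL line has a millisecond "timestamp" field --
    adjust the field name to whatever your history.jsonl actually uses.
    """
    by_hour, by_day = Counter(), Counter()
    for line in lines:
        line = line.strip()
        if not line:
            continue
        entry = json.loads(line)
        ts = datetime.fromtimestamp(entry["timestamp"] / 1000, tz=timezone.utc)
        by_hour[ts.hour] += 1
        by_day[ts.strftime("%A")] += 1
    return by_hour, by_day

# Synthetic sample; in practice you would iterate over
# open(Path.home() / ".claude" / "history.jsonl").
sample = [
    '{"timestamp": 1737468000000, "display": "fix flaky test"}',
    '{"timestamp": 1737475200000, "display": "refactor telemetry agent"}',
]
hours, days = tally_sessions(sample)
```

From the two counters, the percentage splits in the report (business hours vs evening, day-of-week distribution) are straightforward to derive.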

Claude Code Usage Analysis Report

Generated: January 21, 2026


When You Work

Time of Day

  • Peak hours: 14:00-17:00 (afternoon coding block)
  • Secondary peak: 20:00-21:00 (evening work)
  • 59% of sessions during business hours (9-18)
  • 26% evening work (18-22)
  • 8% early morning (6-9)
  • 7% late night (22-6)

Day of Week

| Day       | Sessions | Avg/Day |
| --------- | -------- | ------- |
| Wednesday | 1,262    | 78.9    |
| Tuesday   | 1,162    | 72.6    |
| Monday    | 1,062    | 66.4    |
| Thursday  | 812      | 50.8    |
| Friday    | 663      | 41.4    |
| Sunday    | 268      | 16.8    |
| Saturday  | 212      | 13.2    |

Key insight: Monday-Wednesday accounts for 64% of weekly activity. Weekend days average ~15 sessions, roughly a quarter of the weekday rate.


What You Work On

Task Types (from Todo Analysis)

Based on 1,361 tracked todo items (a todo can match more than one keyword category, so percentages sum to over 100%):

| Category             | Tasks | Percentage |
| -------------------- | ----- | ---------- |
| AI/LLM features      | 430   | 31.6%      |
| Bug fixes            | 392   | 28.8%      |
| Refactoring          | 364   | 26.7%      |
| Testing/verification | 348   | 25.6%      |
| Creation tasks       | 283   | 20.8%      |
| Analysis             | 223   | 16.4%      |
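The keyword-bucketing behind this table can be sketched as follows. The keyword lists here are illustrative assumptions, not the exact lists the analysis used; note that one todo may land in several buckets, which is why the report's percentages exceed 100%:

```python
from collections import Counter

# Illustrative keyword buckets -- the real analysis may have used
# different or larger keyword lists per category.
CATEGORIES = {
    "Bug fixes": ("fix", "bug", "broken"),
    "Refactoring": ("refactor", "rename", "extract"),
    "Testing/verification": ("test", "verify", "check"),
    "Creation tasks": ("add", "create", "implement"),
}

def categorise(todos):
    """Count todos per category; a todo can match several buckets."""
    counts = Counter()
    for todo in todos:
        text = todo.lower()
        for category, keywords in CATEGORIES.items():
            if any(k in text for k in keywords):
                counts[category] += 1
    return counts

todos = [
    "Fix flaky eval test",
    "Refactor telemetry agent",
    "Add discard outcome field",
]
counts = categorise(todos)
```

Dividing each count by the total number of todos (1,361 in the report) yields the percentage column.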

Configuration

Model Preference

Default model: Claude Opus 4.5 (opus)

Transitioned from Sonnet to Opus in late November 2025, now almost exclusively Opus.

Enabled Plugins

  1. aisre@incident-io-local - Custom incident investigation
  2. typescript-lsp@claude-plugins-official - TypeScript language server
  3. gopls-lsp@claude-plugins-official - Go language server

Custom Features

  • /speak command with TTS integration (Claude-to-Speech)
  • AI SRE skill for incident investigation
  • Custom status line: bun x ccusage statusline
  • Co-authored-by disabled in git commits

Personal Style Preferences

  • Go comments: 90-char wrap, 1-2 sentences per paragraph
  • UI labels: Sentence case (not Title Case)

Key Insights

  1. Power user - ~70 sessions/day when active, deeply integrated into workflow

  2. Backend-focused - 80%+ of work is Go backend code in core/server

  3. AI feature development - Nearly a third of tracked tasks relate to AI/LLM work

  4. Read-first workflow - Consistently read code before editing (most common tool pair)

  5. Afternoon productivity - Peak coding happens 14:00-17:00

  6. High completion rate - 82% task completion shows follow-through on tracked work

  7. Opus preference - Switched from Sonnet to Opus in late November, now almost exclusively Opus

  8. Heavy caching - 5.95B cache read tokens vs 3.91M input tokens shows efficient context reuse

  9. Mid-week momentum - Wednesday is consistently the most productive day

  10. Long sessions common - 59% of conversations have 50+ turns, indicating sustained development work


Example Prompts

A curated selection of prompts that illustrate how Claude Code is used day-to-day.

Teaching Claude About the Project

Training Claude to work better in your specific environment:

"You (Claude) quite frequently try running go build ./... like this... But I have a tmp directory which contains random go files that will fail compilation that ends up confusing you. I want to add a memory to encourage claude to target specific packages rather than using ./... in this project."

"Please read: https://code.claude.com/docs/en/memory then try adding the memory again"

"You should provide a sub-agent with details of this specific error and ask the sub-agent to read both the existing guidance and app/ai/docs/ai-claude-45-guidance.md to then decide where is most appropriate in the guidance to modify, and what that modification should look like."

Architecture & System Understanding

Asking Claude to understand systems before making changes:

"I want to consider architectural improvements I can make to app/ai/telemetry and specifically the agent that executes telemetry queries. Can you familiarise yourself with the system and tell me when you're happy with how it works."

"Can you read this agent, then read the surrounding documentation, and tell me when you understand how it works."

"Can you look at how we setup the prompt in app/ai/investigate and diagnose the reason for all the differences we see between these two prompts?"

"I want to consider the structure of the models right now. We have separate types for what we load from YAML vs what we use in the app."

Debugging & Investigation

Diagnosing issues with specific data and traces:

"I have an investigation that appears to have gone much worse on a redo than it did before. I worry this is caused by either caching of Anthropic model prompts or introducing structured outputs."

"Can you use ./bin/telemetry ask to query the local tempo for the trace? The trace ID is 94013e870291bc728b4787f1445cc836"

"I think something in our cost calculations may be going wrong as I'm seeing a cost breakdown in our dashboard admin UIs that gives negative values for some of the prompts, presumably where caching has come into play."

"Can you double check by grepping for the incident names in the backtest data to see which spans had access to which that this corroborates what you think is this bug?"

Iterative Development & Review

Working through implementations with validation:

"Can you re-read this plan and check it makes sense? It might be worth asking a sub-agent to review it without our context so it can notice where things aren't explained enough?"

"Can you double check this hasn't modified code in ways that will cause regressions?"

"Can you go back through the downloaded queries and consider broadening this eval suite if you find any interesting examples that are worth adding? I don't want to go overboard, the goal is get good coverage not end up with a huge eval suite"

"I'm ready to commit this and finish work on loki log queries now. I want to check the app/ai/docs/ai-telemetry-query-datasets.md is a useful runbook for exactly the process that we just ran."

Design Decisions

Discussing trade-offs and making choices:

"Let's keep citations as a result type from the telemetry agent, but we add another field 'Discards' which tracks all the other queries we made with a reason they weren't useful. Then let's adjust what we store on the telemetry query to be Outcome: 'unset' | 'cited' | 'discarded' with OutcomeReason null.String, and the telemetry agent tool can update the queries with the relevant outcome."

"I like option 1 but I'd like to consider extracting the value using json path queries into the scorecard json which should be much more efficient"

"It would be much better if adding a model requires just modifying the registry.yaml and running go generate ./app/ai/model to produce a new models.gen.go file that provides this const mapping."

Backtest Analysis (AI Feature Work)

Specialized prompts for evaluating AI system performance:

"Can you double check that the incident conversations appeared around the heads-up message in the prompt broadly similarly between real and backtest?"

"For each investigation, can you check what the initial telemetry query returned in the baseline and compare it to what the same check did in the latest? I want to figure out if telemetry actually varied between the runs"

"Create an attribution chain for investigation 13-inc-16895-01. Trace how the wrong conclusion propagated through the investigation checks."

"Can you use parallel sub-agents to double check that we've got this correct for the investigations mentioned in the CheckPause report?"

Code Style & Conventions

Enforcing consistency:

"In general, we like to order files so that function definitions follow their usage, so that you can read top to bottom."

"Please make sure we use sentence case for all our metrics"

"Can you commit this as 'Clear suggestion after accept' with a well formed commit message with the table above and explanation"

Sub-Agent Orchestration

Delegating work to sub-agents for context isolation and parallel execution:

"Can you use parallel sub-agents to double check that we've got this correct for the investigations mentioned in the CheckPause report?"

"Ok, I think we have enough to go through and analyse each of the heads-up messages now. Do it in parallel sub-agents with 25 agents running concurrently."

"Yes, I would like you to create attribution chains for all the other relevant investigations. Please do this in parallel, then after that I would like to cluster those attribution chains to understand where in the investigation system there are problems (e.g. in what specific check) and then collate an error report for each of those places so that we can consider how to fix each of them in code."

"Can you create a sub-agent for each of the different telemetry query types to check what structure they would naturally want to save into this data model so we can confirm what is truly universal and can be shared and what is different."

"Can you look into the telemetry check and run the same audit as we did before against all the new types of query we support? I recommend indexing the check first then getting sub-agents to do the work"

Using sub-agents to isolate context or get fresh perspectives:

"Can you re-read this plan and check it makes sense? It might be worth asking a sub-agent to review it without our context so it can notice where things aren't explained enough?"

"Can you get another sub-agent with minimal context to review..."

"I want to create some instructions in playbooks about creating an attribution chain like this. Can you consider where to best add them, perhaps in a sub-agent to avoid polluting this context?"

"You should provide a sub-agent with details of this specific error and ask the sub-agent to read both the existing guidance and app/ai/docs/ai-claude-45-guidance.md to then decide where is most appropriate in the guidance to modify, and what that modification should look like."

Sub-agents for exploration and research:

"Ok, I've made a much larger download now and I want you to ask a sub-agent to try exploring it to categorise all the different queries into specific types of failure. I want that sub-agent to try achieving this task and then reflect on whether the structure helped it do this or if we should make improvements."

"Can you have a sub-agent look into the investigation system and find all the other places that we could've protected against leaning too hard on this? We could do it at the hypothesis step, or when building findings, or in many other places I expect, and I want to list all of them and then consider the relative merits of putting this guidance in each and what the guidance might look like."

Multi-Step Pipelining

Chaining operations with explicit sequencing:

"This will be a trial run where we can figure out how to do this analysis, then capture the process that we found was effective into the plan file, before we start-up multiple sub-agents to deal with each heads-up message in parallel."

"Can you consider how we modify the prompt given this new understanding of heads-up messages? I expect formulating a statement of intent about the value proposition of heads-up messages that I can review is a good first step, then we can consider how the value prop might cause us to modify the prompt."

"Can you pick one of these investigations and do this analysis and then present the chain back to me? We can iterate on the right format for the chain before we start on the other investigations."

"Can you find an example investigation that went wrong in its final analysis due to the code agent producing an invalid result or incorrect assumption, then we can look specifically at the chain of events and the output of the code agent that led to this."

"Let's keep citations as a result type from the telemetry agent, but we add another field 'Discards' which tracks all the other queries we made with a reason they weren't useful. Then let's adjust what we store on the telemetry query to be Outcome: 'unset' | 'cited' | 'discarded' with OutcomeReason null.String, and the telemetry agent tool can update the queries with the relevant outcome."

Eval-driven iteration with sub-agents:

"I want you to go through each of the eval cases in turn and ensure they are performing appropriately. Run the eval in a sub-agent with --repeat 3 and --print-all-actuals so you can see how things are varying, then tell me if you see problems and how you recommend we solve them."

"Can we look into some of the query variability in these evals? Let's have sub-agents run with repeat 3 for the cases that are flipping the query around and see if there are patterns about why it's flipping between query plans"

"This sounds good. Can we look at the eval cases that are failing on the query check to see if we can make them more stable without losing the purpose of what they're testing? I advise using sub-agents with --repeat for each of the evals going wrong."
