Cognitive Load Analyzer for Code Reviews - claude code statusline

Cognitive Load Status Line for Claude Code

Real-time cognitive load analyzer that shows how complex your current changes are in Claude Code's status bar.

What It Shows

CL: 🟢 Low (3 chunks) | Ready for review
CL: 🟡 Medium (7 chunks) | Split: async + rest  
CL: 🔴 High (12 chunks) | ⚠️ Complexity: 20 (critical) - Refactor before review

Thresholds:

  • 🟢 1-4 chunks: Single mental load
  • 🟡 5-8 chunks: Context switching required
  • 🔴 9+ chunks: Cognitive overload

Installation

1. Install Dependencies

# macOS
brew install ripgrep scc jq git lizard-analyzer

# Linux
apt install ripgrep jq git  # scc may not be packaged by your distro; install it from its GitHub releases if apt can't find it
pip install lizard  # optional
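
Before wiring it into Claude Code, you can confirm the hard requirements are on your PATH. lizard is optional and only sharpens the complexity score; bc, which the script uses for the scoring math, is usually preinstalled on macOS and most Linux distros:

for cmd in rg scc jq git bc; do command -v "$cmd" >/dev/null || echo "missing: $cmd"; done
command -v lizard >/dev/null || echo "lizard not found (optional)"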

2. Download Script

curl -o ~/.claude/statusline-cognitive-load.sh \
  https://gist.githubusercontent.com/YOUR_USERNAME/GIST_ID/raw/statusline-cognitive-load.sh
chmod +x ~/.claude/statusline-cognitive-load.sh
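
Optionally, smoke-test the script from a terminal before touching your settings. It reads a small JSON payload on stdin (session_id and workspace.current_dir), so from inside a git repo something like this should print a CL: line (the interpreter path below assumes Homebrew on Apple Silicon; use whichever bash 4+ you have):

echo '{"session_id": "smoke-test", "workspace": {"current_dir": "'"$PWD"'"}}' | \
  /opt/homebrew/bin/bash ~/.claude/statusline-cognitive-load.sh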

3. Configure Claude Code

Add to your Claude Code settings:

{
  "status_line": {
    "type": "command",
    "command": "/opt/homebrew/bin/bash ~/.claude/statusline-cognitive-load.sh",
    "padding": 0
  }
}
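
If you keep your settings in a file (commonly ~/.claude/settings.json; adjust the path and the bash location to your setup), a jq one-liner can merge the entry instead of hand-editing:

jq '.status_line = {"type": "command", "command": "/opt/homebrew/bin/bash ~/.claude/statusline-cognitive-load.sh", "padding": 0}' \
  ~/.claude/settings.json > /tmp/settings.json && mv /tmp/settings.json ~/.claude/settings.json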

How It Works

Analyzes your in-flight changes (commits since the base branch, plus staged and unstaged edits) and calculates cognitive load based on:

  • Complexity: Cyclomatic complexity of functions
  • File spread: Number of files modified
  • Abstractions: New classes/interfaces/types
  • Breaking changes: Removed public APIs
  • Async operations: Promises, async/await usage
  • Hot files: Recently changed files (high churn)
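
Under the hood these are mostly single-pass ripgrep counts over the combined diff (base-branch..HEAD, staged, and unstaged), which are then weighted and bucketed into "chunks". A simplified sketch of the kind of counting involved (the real script caches results and weights each signal):

diff=$( (git diff origin/main..HEAD; git diff --cached; git diff) 2>/dev/null )
async_hits=$(echo "$diff" | rg -c '^\+.*(async|await|Promise)' || echo 0)
changed_files=$( { git diff --name-only origin/main..HEAD; git diff --name-only; } | sort -u | wc -l )
echo "async additions: $async_hits, files touched: $changed_files"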

Features

  • Smart caching: 500ms execution budget with 24-hour cache
  • Debouncing: Updates max once per minute
  • Session-aware: Separate analysis per Claude Code session
  • Language support: JS/TS, Python, Go, Java, C/C++, Rust, Ruby, PHP, C#
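
Results are cached per session under ~/.cache/cognitive-load/<session_id> with a 24-hour expiry and a one-minute debounce; if the status line ever looks stale, clearing that directory forces a fresh analysis:

rm -rf ~/.cache/cognitive-load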

Debug Mode

Enable debug output:

DEBUG=1 echo '{"session_id": "test", "workspace": {"current_dir": "."}}' | \
  ~/.claude/statusline-cognitive-load.sh

License

MIT

#!/opt/homebrew/bin/bash
# Enhanced Cognitive Load Analyzer v3.3 - Improved Output
# Dependencies: ripgrep, scc, jq, git, bc, lizard (optional)
set -euo pipefail
# ================== CONFIGURATION ==================
readonly CACHE_BASE="${HOME}/.cache/cognitive-load"
readonly MAX_EXECUTION_TIME=0.5 # 500ms budget for Claude Code
readonly CACHE_EXPIRY=86400 # 24 hours
# Color codes (standard ANSI)
readonly COLOR_GREEN='\033[32m'
readonly COLOR_YELLOW='\033[33m'
readonly COLOR_RED='\033[31m'
readonly COLOR_RESET='\033[0m'
# DEBUG=1
# ================== INITIALIZATION ==================
# Parse Claude Code input
if [ -t 0 ]; then
  echo "Error: This script is designed for Claude Code status line only"
  exit 1
fi
claude_input=$(cat)
SESSION_ID=$(echo "$claude_input" | jq -r '.session_id // "default"')
CURRENT_DIR=$(echo "$claude_input" | jq -r '.workspace.current_dir // "."')
MODEL_NAME=$(echo "$claude_input" | jq -r '.model.display_name // ""')
# Session-specific cache
readonly CACHE_DIR="${CACHE_BASE}/${SESSION_ID}"
mkdir -p "$CACHE_DIR"
# Clean old session caches (keep current session + last 24h)
find "$CACHE_BASE" -maxdepth 1 -type d -mtime +1 ! -path "$CACHE_BASE" ! -path "$CACHE_DIR" -exec rm -rf {} + 2>/dev/null || true
# Debounce - run at most once per minute
readonly DEBOUNCE_FILE="${CACHE_DIR}/last_run"
if [ -f "$DEBOUNCE_FILE" ]; then
  last_run=$(cat "$DEBOUNCE_FILE")
  now=$(date +%s)
  if [ $((now - last_run)) -lt 60 ]; then
    # Use cached result
    cached_result="${CACHE_DIR}/last_result"
    [ -f "$cached_result" ] && cat "$cached_result" && exit 0
  fi
fi
# ================== DEPENDENCY CHECK ==================
check_dependencies() {
  local missing=()
  # bc is needed for the weighted scoring math
  for cmd in rg scc jq git bc; do
    if ! command -v "$cmd" >/dev/null 2>&1; then
      missing+=("$cmd")
    fi
  done
  if [ ${#missing[@]} -gt 0 ]; then
    echo "CL: ⚠️ Missing: ${missing[*]}"
    exit 1
  fi
}
check_dependencies
# Check optional tools
USE_LIZARD=false
command -v lizard >/dev/null 2>&1 && USE_LIZARD=true
# ================== HELPER FUNCTIONS ==================
# Cache management
get_cache_key() {
  echo "$1" | md5sum 2>/dev/null | cut -d' ' -f1 || \
    echo "$1" | md5 2>/dev/null # macOS fallback
}
# Return a cached value if present and fresh (callers strip stray newlines)
get_cached() {
  local key="$1"
  local cache_file="${CACHE_DIR}/$(get_cache_key "$key")"
  if [ -f "$cache_file" ]; then
    local file_age
    if stat -f %m "$cache_file" >/dev/null 2>&1; then
      # macOS
      file_age=$(($(date +%s) - $(stat -f %m "$cache_file")))
    else
      # Linux
      file_age=$(($(date +%s) - $(stat -c %Y "$cache_file")))
    fi
    if [ "$file_age" -lt "$CACHE_EXPIRY" ]; then
      cat "$cache_file"
      return 0
    fi
  fi
  return 1
}
set_cached() {
  local key="$1"
  local value="$2"
  echo "$value" > "${CACHE_DIR}/$(get_cache_key "$key")"
}
# ================== GIT ANALYSIS ==================
cd "$CURRENT_DIR" || exit 1
# Quick git check
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "CL: 📁 No git"
  exit 0
fi
# Get base branch efficiently
get_base_branch() {
  if git show-ref --quiet refs/remotes/origin/main 2>/dev/null; then
    echo "origin/main"
  elif git show-ref --quiet refs/remotes/origin/master 2>/dev/null; then
    echo "origin/master"
  else
    echo "HEAD~1" # Fallback to parent commit
  fi
}
BASE_BRANCH=$(get_base_branch)
CURRENT_BRANCH=$(git rev-parse --abbrev-ref HEAD 2>/dev/null || echo "HEAD")
# ================== FAST DIFF COLLECTION ==================
# Get a hash of staged changes for cache key
get_staged_hash() {
  git diff --cached --name-status 2>/dev/null | md5sum 2>/dev/null | cut -d' ' -f1 || \
    git diff --cached --name-status 2>/dev/null | md5 2>/dev/null || \
    echo "no-staged"
}
# Include staged hash in cache key to invalidate when staging changes
cache_key="diffs:${BASE_BRANCH}:${CURRENT_BRANCH}:$(git rev-parse HEAD 2>/dev/null || echo 'working'):$(get_staged_hash)"
# Debug output
if [ -n "${DEBUG:-}" ]; then
  echo "Cache key: $cache_key" >&2
  echo "BASE_BRANCH: $BASE_BRANCH" >&2
fi
full_diff=""
if ! full_diff=$(get_cached "$cache_key"); then
  # Collect all diffs - don't timeout, this is critical!
  commit_diff=$(git diff --no-ext-diff "${BASE_BRANCH}..HEAD" 2>/dev/null || echo "")
  staged_diff=$(git diff --no-ext-diff --cached 2>/dev/null || echo "")
  unstaged_diff=$(git diff --no-ext-diff 2>/dev/null || echo "")
  # Combine all diffs
  full_diff="${commit_diff}
${staged_diff}
${unstaged_diff}"
  # Debug output
  if [ -n "${DEBUG:-}" ]; then
    echo "Commit diff lines: $(echo "$commit_diff" | wc -l)" >&2
    echo "Staged diff lines: $(echo "$staged_diff" | wc -l)" >&2
    echo "Unstaged diff lines: $(echo "$unstaged_diff" | wc -l)" >&2
    echo "Full diff lines: $(echo "$full_diff" | wc -l)" >&2
  fi
  # Only cache if we got actual content
  if [ -n "$full_diff" ] && [ "$(echo "$full_diff" | wc -l)" -gt 1 ]; then
    set_cached "$cache_key" "$full_diff"
  fi
fi
# Better empty check with proper line counting (|| true: grep exits non-zero when nothing matches, which set -e would treat as fatal)
diff_lines=$(echo "$full_diff" | grep -c '^[+-]' || true)
if [ "$diff_lines" -lt 2 ]; then
  # Double-check with file count
  file_count=$(git diff --name-only --cached 2>/dev/null | wc -l)
  if [ "$file_count" -gt 0 ]; then
    echo -e "${COLOR_RED}CL: ⚠️ Diff collection failed${COLOR_RESET}"
    # Force clear bad cache
    rm -f "${CACHE_DIR}/$(get_cache_key "$cache_key")"
  else
    echo -e "CL: 🟢 Clean | No changes"
  fi
  exit 0
fi
# Get file list
all_files=$(mktemp)
trap 'rm -f "$all_files"' EXIT
bash -c "
  {
    git diff --name-only ${BASE_BRANCH}..HEAD 2>/dev/null
    git diff --name-only --cached 2>/dev/null
    git diff --name-only 2>/dev/null
    git ls-files --others --exclude-standard 2>/dev/null
  } | sort -u | grep -v '^$'
" > "$all_files"
file_count=$(wc -l < "$all_files" | tr -d ' ')
[ "$file_count" -eq 0 ] && echo -e "CL: 🟢 Clean | No changes" && exit 0
# ================== SCORING SYSTEM ==================
declare -A weights=(
  ["abstractions"]=0.5
  ["functions"]=0.5
  ["breaking_changes"]=2.0
  ["async"]=1.5
  ["error_handling"]=1.2
  ["nested_logic"]=2.0
  ["domain"]=1.5
  ["imports"]=0.5
  ["side_effects"]=1.8
  ["schema"]=1.0
  ["components"]=1.0
  ["file_spread"]=1.5
  ["new_framework"]=3.0
  ["complexity"]=2.0
  ["duplication"]=2.5
  ["churn"]=1.8
  ["semantic"]=1.5
)
declare -A scores
total_weighted_score=0
complexity=0 # Global variable for complexity score
add_score() {
  local category="$1"
  local points="$2"
  local weight="${weights[$category]:-1.0}"
  scores["$category"]="$points"
  local weighted=$(echo "scale=0; $points * $weight / 1" | bc)
  total_weighted_score=$((total_weighted_score + weighted))
}
# ================== PATTERN ANALYSIS (FAST) ==================
# Use ripgrep with single pass for all patterns
analyze_patterns() {
  local diff="$1"
  local abstractions=$(echo "$diff" | rg -c '^\+\s*export\s+(class|interface|type)\s+' 2>/dev/null || echo 0)
  local functions=$(echo "$diff" | rg -c '^\+.*(function\s+|const\s+\w+\s*=.*=>|async\s+function)' 2>/dev/null || echo 0)
  local breaking=$(echo "$diff" | rg -c '^\-\s*(export|public|pub)\s+' 2>/dev/null || echo 0)
  local async_count=$(echo "$diff" | rg -c '^\+.*(async|await|Promise|Observable)' 2>/dev/null || echo 0)
  local errors=$(echo "$diff" | rg -c '^\+.*(try|catch|throw|Result|Option|Either|Error)' 2>/dev/null || echo 0)
  local nested=$(echo "$diff" | rg -c '^\+.*if.*if|^\+.*(for|while).*(for|while)' 2>/dev/null || echo 0)
  local domain=$(echo "$diff" | rg -c '^\+.*(validate|calculate|process|transform|authorize)' 2>/dev/null || echo 0)
  local imports=$(echo "$diff" | rg -c '^\+\s*(import\s|from\s|require\()' 2>/dev/null || echo 0)
  local io=$(echo "$diff" | rg -c '^\+.*(fetch|axios|http|database|query|readFile|writeFile)' 2>/dev/null || echo 0)
  local schema=$(echo "$diff" | rg -c '^\+.*(CREATE TABLE|ALTER TABLE|migration|@Entity)' 2>/dev/null || echo 0)
  # Debug output
  if [ -n "${DEBUG:-}" ]; then
    echo "Direct counts - imports: $imports, functions: $functions, async: $async_count" >&2
  fi
  # Score based on counts
  if [ "$abstractions" -gt 10 ]; then
    add_score "abstractions" 10
  elif [ "$abstractions" -gt 5 ]; then
    add_score "abstractions" 7
  elif [ "$abstractions" -gt 0 ]; then
    add_score "abstractions" "$abstractions"
  fi
  if [ "$functions" -gt 20 ]; then
    add_score "functions" 3
  elif [ "$functions" -gt 6 ]; then
    add_score "functions" 2
  elif [ "$functions" -gt 0 ]; then
    add_score "functions" 1
  fi
  [ "$breaking" -gt 0 ] && add_score "breaking_changes" 2
  [ "$async_count" -gt 0 ] && add_score "async" 2
  if [ "$errors" -gt 2 ]; then
    add_score "error_handling" 2
  elif [ "$errors" -gt 0 ]; then
    add_score "error_handling" 1
  fi
  [ "$nested" -gt 0 ] && add_score "nested_logic" 2
  if [ "$domain" -gt 5 ]; then
    add_score "domain" 2
  elif [ "$domain" -gt 0 ]; then
    add_score "domain" 1
  fi
  if [ "$imports" -gt 8 ]; then
    add_score "imports" 2
  elif [ "$imports" -gt 3 ]; then
    add_score "imports" 1
  fi
  if [ "$io" -gt 5 ]; then
    add_score "side_effects" 2
  elif [ "$io" -gt 0 ]; then
    add_score "side_effects" 1
  fi
  [ "$schema" -gt 0 ] && add_score "schema" 2
}
analyze_patterns "$full_diff"
# File spread score
if [ "$file_count" -gt 20 ]; then
add_score "file_spread" 3
elif [ "$file_count" -gt 10 ]; then
add_score "file_spread" 2
elif [ "$file_count" -gt 5 ]; then
add_score "file_spread" 1
fi
# ================== COMPLEXITY ANALYSIS ==================
if [ "$USE_LIZARD" = true ]; then
# Get modified source files (limit to 100 for performance)
source_files=$(grep -E '\.(js|ts|jsx|tsx|py|go|java|c|cpp|rs|rb|php|cs)$' "$all_files" | sort -R | head -100 | grep -v '^$')
if [ -n "$source_files" ]; then
# Include file modification times in cache key for proper invalidation
complexity_cache_key="complexity:$(echo "$source_files" | while read -r f; do
[ -f "$f" ] && stat -f "%m" "$f" 2>/dev/null || stat -c "%Y" "$f" 2>/dev/null || echo "0"
done | md5sum 2>/dev/null | cut -d' ' -f1 || echo "default")"
complexity=$(get_cached "$complexity_cache_key" | tr -d '\n') || {
# Run lizard and parse output more carefully
complexity=$(echo "$source_files" | xargs lizard --CCN 1 2>/dev/null | \
awk '
/^-+$/ { in_functions=1; next }
/^[0-9]+ file/ { exit }
in_functions && /^[[:space:]]*[0-9]+/ {
ccn=$2
if (ccn+0 == ccn) {
sum += ccn
count++
}
}
END {
if (count > 0)
print int(sum/count)
else
print 0
}
' || echo 0)
# Validate and sanitize
complexity=$(echo "$complexity" | tr -d '\n' | tr -d ' ')
if ! [[ "$complexity" =~ ^[0-9]+$ ]]; then
complexity=0
fi
set_cached "$complexity_cache_key" "$complexity"
}
# Debug output
if [ -n "${DEBUG:-}" ]; then
echo "Analyzed $(echo "$source_files" | grep -c .) files, avg complexity: $complexity" >&2
fi
# Score based on average cyclomatic complexity
if [[ "$complexity" =~ ^[0-9]+$ ]]; then
if [ "$complexity" -gt 15 ]; then
add_score "complexity" 3 # High complexity
elif [ "$complexity" -gt 10 ]; then
add_score "complexity" 2 # Medium complexity
elif [ "$complexity" -gt 5 ]; then
add_score "complexity" 1 # Low complexity
fi
fi
fi
elif [ "$file_count" -gt 100 ]; then
# For very large changesets, estimate based on diff patterns
complex_patterns=$(echo "$full_diff" | rg -c '^\+.*(if|for|while|switch|case|catch)' 2>/dev/null || echo 0)
if [ "$complex_patterns" -gt 50 ]; then
add_score "complexity" 3
elif [ "$complex_patterns" -gt 20 ]; then
add_score "complexity" 2
elif [ "$complex_patterns" -gt 10 ]; then
add_score "complexity" 1
fi
else
  # Fallback to line count estimation (|| true: grep -c exits non-zero when the count is 0)
  line_changes=$(echo "$full_diff" | grep -c '^[+-]' || true)
  if [ "$line_changes" -gt 500 ]; then
    add_score "complexity" 2
  elif [ "$line_changes" -gt 200 ]; then
    add_score "complexity" 1
  fi
fi
# ================== GIT HISTORY (CACHED) ==================
# Batch git history analysis
history_cache_key="history:${BASE_BRANCH}:${CURRENT_BRANCH}"
if history_scores=$(get_cached "$history_cache_key"); then
  eval "$history_scores"
else
  # Check for high churn files
  churn_score=0
  if [ "$file_count" -le 20 ]; then
    hot=0
    while IFS= read -r file; do
      count=$(git log --since='30 days ago' --oneline -- "$file" 2>/dev/null | wc -l)
      # plain arithmetic assignment: ((hot++)) would return non-zero on the first increment and trip set -e
      [ "$count" -gt 10 ] && hot=$((hot + 1))
    done < "$all_files"
    if [ "$hot" -gt 5 ]; then churn_score=4
    elif [ "$hot" -gt 2 ]; then churn_score=2
    elif [ "$hot" -gt 0 ]; then churn_score=1
    fi
  fi
  set_cached "$history_cache_key" "add_score 'churn' $churn_score"
  [ "$churn_score" -gt 0 ] && add_score "churn" "$churn_score"
fi
# Semantic complexity (quick estimate based on diff size)
semantic_score=$(echo "$full_diff" | wc -l | awk '{
  if ($1 > 1000) print 5
  else if ($1 > 500) print 3
  else if ($1 > 200) print 2
  else if ($1 > 50) print 1
  else print 0
}')
[ "$semantic_score" -gt 0 ] && add_score "semantic" "$semantic_score"
# ================== FINAL CALCULATION ==================
# Calculate cognitive load chunks
final_chunks=$((total_weighted_score / 4))
[ "$final_chunks" -eq 0 ] && final_chunks=1
# Find highest impact areas (top 2)
declare -a impact_areas
for category in "${!scores[@]}"; do
  weighted=$(echo "${scores[$category]} * ${weights[$category]}" | bc | cut -d. -f1)
  impact_areas+=("$weighted:$category")
done
# Sort and get top impacts
IFS=$'\n' sorted_impacts=($(printf '%s\n' "${impact_areas[@]}" | sort -rn))
top_impact=$(echo "${sorted_impacts[0]}" | cut -d: -f2)
second_impact=""
[ ${#sorted_impacts[@]} -gt 1 ] && second_impact=$(echo "${sorted_impacts[1]}" | cut -d: -f2)
# ================== ENHANCED OUTPUT ==================
# Generate detailed suggestion based on impact areas
get_detailed_suggestion() {
  local chunks="$1"
  local primary="$2"
  local secondary="$3"
  # Check if we have extreme complexity (from lizard analysis)
  if [ -n "$complexity" ] && [ "$complexity" -gt 15 ]; then
    if [ "$chunks" -gt 8 ]; then
      echo "⚠️ Complexity: ${complexity} (critical) - Refactor before review"
    elif [ "$chunks" -gt 4 ]; then
      echo "Complexity: ${complexity} (high) - Split complex functions"
    else
      echo "Complexity: ${complexity} - Add tests & docs"
    fi
    return
  fi
  if [ "$chunks" -le 4 ]; then
    echo "Ready for review"
  elif [ "$chunks" -le 8 ]; then
    case "$primary" in
      churn) echo "Hot files - careful review" ;;
      complexity) echo "Complex logic - add tests" ;;
      breaking_changes) echo "Breaking changes - check deps" ;;
      schema) echo "DB changes - review migrations" ;;
      file_spread) echo "Split: $([ -n "$secondary" ] && echo "$secondary + rest" || echo "core + features")" ;;
      async) echo "Async complexity - test edge cases" ;;
      nested_logic) echo "Nested logic - refactor suggested" ;;
      error_handling) echo "Error paths - verify handling" ;;
      side_effects) echo "I/O heavy - check performance" ;;
      *) echo "Consider splitting large areas" ;;
    esac
  else
    # High cognitive load - suggest specific splits
    local split_count=$((chunks / 4))
    [ "$split_count" -lt 2 ] && split_count=2
    [ "$split_count" -gt 5 ] && split_count="4-5"
    case "$primary" in
      file_spread) echo "Break into $split_count PRs by feature" ;;
      complexity) echo "⚠️ Refactor complex code first" ;;
      schema) echo "Separate schema from logic changes" ;;
      breaking_changes) echo "Isolate breaking changes first" ;;
      semantic) echo "${file_count} files - Split by domain" ;;
      async) echo "Too much async - Split by flow" ;;
      *) echo "Break into $split_count focused PRs" ;;
    esac
  fi
}
# Determine load level and color
if [ "$final_chunks" -le 4 ]; then
indicator="🟢"
level="Low"
level_desc="Single mental load"
elif [ "$final_chunks" -le 8 ]; then
indicator="🟡"
level="Medium"
level_desc="Context switching"
else
indicator="🔴"
level="High"
level_desc="Overload"
fi
# Generate suggestion
suggestion=$(get_detailed_suggestion "$final_chunks" "$top_impact" "$second_impact")
# Format final output
output="CL: ${indicator} ${level} (${final_chunks} chunks) | ${suggestion}"
# Save timestamp and result
date +%s > "$DEBOUNCE_FILE"
echo "$output" | tee "${CACHE_DIR}/last_result"
exit 0