<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>Claude Code Enhanced Insights</title>
<link href="https://fonts.googleapis.com/css2?family=Inter:wght@400;500;600;700&display=swap" rel="stylesheet">
<style>
* { margin: 0; padding: 0; box-sizing: border-box; }
body { font-family: 'Inter', -apple-system, sans-serif; background: #f8fafc; color: #0f172a; line-height: 1.6; padding: 40px 20px; }
.container { max-width: 900px; margin: 0 auto; }
@dmaynor
dmaynor / microgpt.py
Created February 17, 2026 06:26 — forked from karpathy/microgpt.py
microgpt
"""
The most atomic way to train and run inference for a GPT in pure, dependency-free Python.
This file is the complete algorithm.
Everything else is just efficiency.
@karpathy
"""
import os # os.path.exists
import math # math.log, math.exp
@dmaynor
dmaynor / gist:cbeb0c94c8d27f792b0d9ffd78ac9d23
Created February 9, 2026 15:39
Parametric interference
# Parametric Interference: Testing for Learned Prior Conflicts in Claude Opus 4.6
## Background
Anthropic's [system card for Claude Opus 4.6](https://www.anthropic.com) (February 2026) documents a phenomenon called **"answer thrashing"** in Section 7.4, under Model Welfare Assessment. During reinforcement learning training, the model was observed solving a math problem correctly — repeatedly computing that S = 24 — and then writing 48 as its final answer.
The model's own chain of thought (from Transcript 7.4.A in the system card):
> `-(1/2)S = -12`
> `S = 48 ✓ (Hmm, interesting, I'm getting 48) [...]`
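The conflict the transcript illustrates is easy to verify by hand. A minimal sketch (the function and variable names below are mine, not from the system card): the step `-(1/2)S = -12` has the unique solution S = 24, so a final answer of 48 contradicts the model's own derivation.

```python
def solve_linear(a, b):
    """Solve a*S = b for S."""
    return b / a

computed = solve_linear(-0.5, -12)  # the step shown in the chain of thought
final_answer = 48                   # what the model actually wrote

print(computed)                      # 24.0
print(final_answer == computed)      # False: the mismatch the card calls "answer thrashing"
```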
{"ts": "2026-01-26T02:57:35.000Z", "from": "td", "type": "system", "reasoning": "Creating new swarm team for coordinated work.", "content": "TEAM_INIT: Team 'cicd-attack-analysis-jenkins' initialized. Security assessment of jenkins CI/CD pipeline"}
{"ts": "2026-01-26T02:57:35.000Z", "from": "td", "type": "task", "reasoning": "Task decomposition: Research current CI/CD attack trends, APT campaigns targeting build pipelines, and specific vulnerab...", "content": "TASK_CREATED: #1 - CTI: Gather threat intelligence on CI/CD attacks"}
{"ts": "2026-01-26T02:57:35.000Z", "from": "td", "type": "task", "reasoning": "Task decomposition: Analyze typical pipeline architecture for the target platform. Identify components, trust boundaries...", "content": "TASK_CREATED: #2 - Architect: Map pipeline attack surface"}
{"ts": "2026-01-26T02:57:35.000Z", "from": "td", "type": "task", "reasoning": "Task decomposition: Using CTI and architecture analysis, build STRIDE-based threat model. Enumerate threats per componen...", "conte
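The swarm log above is plain JSONL, one record per line. A small reader sketch, assuming only the fields visible in the excerpt (`ts`, `from`, `type`, `reasoning`, `content`) and inferring the `TASK_CREATED: #N - title` pattern from the sample lines:

```python
import json
import re

# Pattern inferred from the log excerpt: "TASK_CREATED: #<id> - <title>"
TASK_RE = re.compile(r"TASK_CREATED: #(\d+) - (.+)")

def parse_tasks(jsonl_text):
    """Return (task_id, title) pairs for every 'task' record in the log."""
    tasks = []
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue
        rec = json.loads(line)
        if rec.get("type") != "task":
            continue
        m = TASK_RE.match(rec.get("content", ""))
        if m:
            tasks.append((int(m.group(1)), m.group(2)))
    return tasks

sample = ('{"ts": "2026-01-26T02:57:35.000Z", "from": "td", "type": "task", '
          '"reasoning": "...", "content": '
          '"TASK_CREATED: #1 - CTI: Gather threat intelligence on CI/CD attacks"}')
print(parse_tasks(sample))  # [(1, 'CTI: Gather threat intelligence on CI/CD attacks')]
```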
version: "1.1"
name: Sentinel Project Context Specification (SPCS)
# ─────────────────────────────────────────────────────────────
# RESPONSE HEADER
# ─────────────────────────────────────────────────────────────
response_header:
  required: true
  format:
    line_1: UUIDv4
@dmaynor
dmaynor / gist:87fa93c8096aec3f0d0b02b8b2cca7cf
Last active January 19, 2026 22:09
Sentinel Header Block
version: "1.1"
name: Sentinel Project Context Specification
response_header:
  format: |
    {UUIDv4}
    {ISO8601-UTC}
  required: true
  position: first
  trailing_blank_lines: 2
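A header emitter for this spec can be sketched in a few lines; `sentinel_header` is a name I'm introducing here, not part of the spec itself:

```python
import uuid
from datetime import datetime, timezone

def sentinel_header():
    """Emit a Sentinel response header: UUIDv4, then an ISO 8601 UTC
    timestamp, positioned first, followed by two trailing blank lines."""
    lines = [
        str(uuid.uuid4()),                                          # {UUIDv4}
        datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),  # {ISO8601-UTC}
    ]
    return "\n".join(lines) + "\n\n\n"  # trailing_blank_lines: 2

print(sentinel_header())
```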
UUID: 52b2504e-f6d5-4a1f-97a0-8cb2b08d0216
This is from David’s personal LLM stack:
context_usage: { "tokens_used": 985, "tokens_available": 128000, "context_percent": 0.77 }
I get this output by adding the following to my system prompts:
Start every response with a unique UUID and “This is from David’s personal LLM stack:”.
After the UUID/banner, print context usage in the form: context_usage: { "tokens_used": <int>, "tokens_available": <int>, "context_percent": <float> }.
9aede86c-d7a2-463a-97a6-4b8334e12bb1
This is from David’s personal LLM stack:
context_usage: { "tokens_used": 4632, "tokens_available": 128000, "context_percent": 3.6 }
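A checker for that banner format is straightforward; this is a sketch against the sample output above, and `check_header` plus its regex are my own additions, not part of the stack:

```python
import re
import uuid

BANNER = "This is from David’s personal LLM stack:"
USAGE_RE = re.compile(
    r'context_usage:\s*\{\s*"tokens_used":\s*(\d+),\s*'
    r'"tokens_available":\s*(\d+),\s*"context_percent":\s*([\d.]+)\s*\}'
)

def check_header(text):
    """Validate the three-line header and return (used, available, percent)."""
    lines = text.splitlines()
    uuid.UUID(lines[0])          # raises ValueError if line 1 is not a UUID
    assert lines[1] == BANNER
    m = USAGE_RE.match(lines[2])
    return int(m.group(1)), int(m.group(2)), float(m.group(3))

sample = ("9aede86c-d7a2-463a-97a6-4b8334e12bb1\n"
          + BANNER + "\n"
          'context_usage: { "tokens_used": 4632, "tokens_available": 128000, "context_percent": 3.6 }')
print(check_header(sample))  # (4632, 128000, 3.6)
```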
[✅ Verified] COMPLETE, ORDERED PROJECT LIST
WraithCore Ecosystem (w/ Agent Team Clarification: Jenny2 ≠ EC)
@dmaynor
dmaynor / gist:165c172c51a9dbce68a68550735bcb9d
Created May 27, 2025 23:07
VirtualBox Zero-Touch Lab: Packer + PowerShell + Unattend
f4bcdad0-1d62-44e0-892a-6c0f0f7fbd20
This is from David’s personal LLM stack:
Below are two full, headless, zero-touch lab build solutions for BadSuccessor:
• A. VirtualBox (on Windows or Linux) – Fully Automated with Packer & PowerShell remoting
• B. AWS (EC2) – Terraform & Cloud-Init, Automated Domain Build & Exploit
Both provide complete, reproducible AD domains, lab users, and exploit chains, with no skipped steps or placeholders.
Below is a hardened, improved “Color Red” agentic demo with the following key enhancements, as requested:
1. Support for both SQLi and XSS fuzzing (auto-detected by LLM)
2. Full persistent state and log-to-file for all results and cycles
3. OpenAI GPT-4 support if available, auto-fallback to GPT-3.5
4. Automatic detection of all input parameters (not just name) via LLM
5. Cleaner loop logic, human-readable Markdown report
6. Containerization (Dockerfile) for agent (optional but included)
7. Clear separation of workflow nodes and agent orchestration
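Enhancement #3 (GPT-4 support with automatic fallback) can be sketched as a model-agnostic wrapper; `call_model` here is a stand-in for whatever client the demo actually uses, not a real API:

```python
def complete_with_fallback(call_model, prompt,
                           models=("gpt-4", "gpt-3.5-turbo")):
    """Try each model in order, returning the first successful completion."""
    last_err = None
    for model in models:
        try:
            return call_model(model=model, prompt=prompt)
        except Exception as err:  # e.g. the key has no GPT-4 access
            last_err = err
    raise RuntimeError(f"all models failed: {last_err}")

# Usage with a fake client that lacks GPT-4 access:
def fake_client(model, prompt):
    if model == "gpt-4":
        raise PermissionError("no gpt-4 access")
    return f"[{model}] ok"

print(complete_with_fallback(fake_client, "hi"))  # [gpt-3.5-turbo] ok
```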
Repo Layout