# paperclip tool: `paperclip_cost_audit.py`
Instructions:

```shell
pip install psycopg2-binary
python paperclip_cost_audit.py
```
I've been running a pattern I call `/probe` against AI-generated code before I write anything, and it keeps catching bugs the AI had no idea it was about to introduce.

The shape is simple. Before writing code based on AI output, I force each AI-asserted fact into a numbered CLAIM with an EXPECTED value, then run a command against the real system to check it. I capture the delta. Surprises become tests.
The core move: claims are the AI's own prior confidence, made auditable.
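The loop above can be sketched in a few lines. This is a minimal illustration, not the actual tool: the claim text, expected value, and check command are all hypothetical stand-ins (a real probe would run something like a `psql` query instead of `echo`).

```python
import subprocess

# Each AI-asserted fact becomes a numbered CLAIM with an EXPECTED value
# and a shell command that checks it against the real system.
claims = [
    {"id": 1,
     "claim": "orders table has a primary key on id",   # hypothetical claim
     "expected": "id",
     "cmd": "echo id"},                                  # stand-in for a real query
]

for c in claims:
    actual = subprocess.run(
        c["cmd"], shell=True, capture_output=True, text=True
    ).stdout.strip()
    delta = "OK" if actual == c["expected"] else "SURPRISE"
    print(f"CLAIM {c['id']}: expected={c['expected']!r} actual={actual!r} -> {delta}")
```

Any `SURPRISE` line is exactly the delta worth turning into a regression test.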
Most AI tool integrations follow the same pattern: fetch raw data, paste it into the prompt, ask the model to analyze it. A CSV with 500 rows. A database dump. An API response with nested JSON. The LLM reads every byte, burns tokens parsing structure it cannot see efficiently, and produces a summary that a three-line script could have generated.
This approach is expensive, slow, and lossy: the model spends tokens re-reading raw bytes, the oversized prompt adds latency, and the resulting summary discards structure that a script would have preserved exactly.
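To make the "three-line script" point concrete, here is a sketch that computes the kind of summary a model is usually asked for. The inline CSV stands in for the 500-row file that would otherwise be pasted into the prompt; the `pnl` column name is an assumption for illustration.

```python
import csv, io, statistics

# Stand-in for the 500-row CSV that would otherwise be pasted into the prompt.
raw = "pnl\n12.5\n-3.0\n7.25\n"

rows = [float(r["pnl"]) for r in csv.DictReader(io.StringIO(raw))]
print(f"rows={len(rows)} total={sum(rows):.2f} mean={statistics.mean(rows):.2f}")
# -> rows=3 total=16.75 mean=5.58
```

The script's output is small, exact, and cheap; that is what belongs in the prompt, not the raw bytes.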
The tool analyzes trading snapshots from a CSV, calculating P&L, win rate, and expected-value metrics (standard EV, Kelly criterion, Sharpe ratio) aggregated by ISO week. The calculations live in a pure functional core; results are printed to the console.